Deleting many rows without locking them
Set the lock level for your deletes and updates to a more granular lock mode. Note that your transactions will now be slower.
http://www.postgresql.org/docs/current/static/sql-lock.html
http://www.postgresql.org/docs/current/static/explicit-locking.html
Try a subselect and use a unique column as the condition:
DELETE FROM table
WHERE id IN (SELECT id FROM table WHERE key = 'needle' LIMIT 10000);
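To delete all matching rows this way, the statement above is typically repeated until it affects zero rows, committing between batches so no single transaction holds locks for long. A minimal runnable sketch of that loop, using the stdlib sqlite3 module as a stand-in database (the table name `tbl` and column names are made up; with PostgreSQL you would run the same SQL through a driver such as psycopg2):

```python
import sqlite3

# Stand-in database with 25,000 matching rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (id INTEGER PRIMARY KEY, key TEXT)")
conn.executemany("INSERT INTO tbl (id, key) VALUES (?, ?)",
                 [(i, "needle") for i in range(25000)])
conn.commit()

while True:
    # Delete at most 10,000 matching rows per statement, as in the
    # subselect above, then commit so the transaction stays short.
    cur = conn.execute(
        "DELETE FROM tbl WHERE id IN "
        "(SELECT id FROM tbl WHERE key = ? LIMIT 10000)",
        ("needle",))
    conn.commit()
    if cur.rowcount == 0:  # no rows left to delete
        break
```

Each iteration only locks the rows it actually deletes, so concurrent readers and writers of other rows are not blocked for the whole cleanup.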
Frak's answer is good, but this can be faster. It requires 8.4 or later for window function support (pseudocode):
result = query('select id from (
    select id, row_number() over (order by id) as row_number
    from mytable where key=?
  ) as _
  where row_number % 8192 = 0
  order by id', 'needle');
// result contains the id of every 8192nd row whose key = 'needle'
last_id = 0;
result.append(MAX_INT); // guard
for (row in result) {
query('delete from mytable
where id <= ? and id > ? and key = ?', row.id, last_id, 'needle');
// last_id hints the query planner that there are
// no rows with a smaller id, so it is less likely
// to use a full table scan
last_id = row.id;
}
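The same ranged-delete pattern can be written as real, runnable code. A sketch using the stdlib sqlite3 module (which has supported window functions since SQLite 3.25) so it runs anywhere; the table `mytable`, the `key` column, and the batch size are assumptions carried over from the pseudocode, and with PostgreSQL the identical SQL would go through a driver such as psycopg2:

```python
import sqlite3

BATCH = 1000  # delete at most this many matching rows per statement

# Stand-in data: 10,000 rows, every even id has key = 'needle'.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INTEGER PRIMARY KEY, key TEXT)")
conn.executemany(
    "INSERT INTO mytable (id, key) VALUES (?, ?)",
    [(i, "needle" if i % 2 == 0 else "hay") for i in range(1, 10001)])
conn.commit()

# Boundary ids: the id of every BATCH-th matching row.
boundaries = [r[0] for r in conn.execute(
    """SELECT id FROM (
           SELECT id, ROW_NUMBER() OVER (ORDER BY id) AS rn
           FROM mytable WHERE key = ?
       ) AS t
       WHERE rn % ? = 0 ORDER BY id""",
    ("needle", BATCH))]
boundaries.append(2**63 - 1)  # guard value, like MAX_INT above

last_id = 0
for boundary in boundaries:
    # Each statement deletes one id range, and each batch commits
    # separately, so no single transaction locks rows for long.
    conn.execute(
        "DELETE FROM mytable WHERE id <= ? AND id > ? AND key = ?",
        (boundary, last_id, "needle"))
    conn.commit()
    last_id = boundary
```

The `id > last_id` bound is what lets the planner skip already-cleared ranges instead of rescanning the whole table each time.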
This is premature optimization, which is an evil thing. Beware.