Unexpected scans during delete operation using WHERE IN
"I'm more wondering why the query optimizer would ever use the plan it currently does."
To put it another way, the question is why the following plan looks cheapest to the optimizer, compared with the alternatives (of which there are many).
The inner side of the join is essentially running a query of the following form for each correlated value of BrowserID:
DECLARE @BrowserID smallint;
SELECT
tfsph.BrowserID
FROM dbo.tblFEStatsPaperHits AS tfsph
WHERE
tfsph.BrowserID = @BrowserID
OPTION (MAXDOP 1);
Note that the estimated number of rows is 185,220 (not 289,013) since the equality comparison implicitly excludes NULL (unless ANSI_NULLS is OFF). The estimated cost of the above plan is 206.8 units.
Now let's add a TOP (1) clause:
DECLARE @BrowserID smallint;
SELECT TOP (1)
tfsph.BrowserID
FROM dbo.tblFEStatsPaperHits AS tfsph
WHERE
tfsph.BrowserID = @BrowserID
OPTION (MAXDOP 1);
The estimated cost is now 0.00452 units. The addition of the Top physical operator sets a row goal of 1 row at the Top operator. The question then becomes how to derive a 'row goal' for the Clustered Index Scan; that is, how many rows should the scan expect to process before one row matches the BrowserID predicate?
The statistical information available shows 166 distinct BrowserID values (1/[All Density] = 1/0.006024096 = 166). Costing assumes that the distinct values are distributed uniformly over the physical rows, so the row goal on the Clustered Index Scan is set to 166.302 (accounting for the change in table cardinality since the sampled statistics were gathered).
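For reference, the density figure comes straight from the statistics on BrowserID; a minimal sketch of how to inspect it (the statistics name below is a placeholder - use whatever sys.stats reports for dbo.tblFEStatsPaperHits):
-- Placeholder statistics name; substitute the real one from sys.stats.
DBCC SHOW_STATISTICS ('dbo.tblFEStatsPaperHits', stats_BrowserID)
WITH DENSITY_VECTOR;
-- All density for BrowserID = 0.006024096, so 1 / 0.006024096 = 166 distinct values.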
The estimated cost of scanning the expected 166 rows is not very large (even executed 339 times, once for each change of BrowserID) - the Clustered Index Scan shows an estimated cost of 1.3219 units, showing the scaling effect of the row goal. The unscaled operator costs for I/O and CPU are shown as 153.931 and 52.8698 respectively.
In practice, it is very unlikely that the first 166 rows scanned from the index (in whatever order they happen to be returned) will contain one each of the possible BrowserID values. Nevertheless, the DELETE plan is costed at 1.40921 units total, and is selected by the optimizer for that reason. Bart Duncan shows another example of this type in a recent post titled Row Goals Gone Rogue.
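As an aside, if you want to see how the statement is costed without the row goal, row goals can be disabled for the query - with trace flag 4138, or on SQL Server 2016 SP1 and later with the USE HINT shown in this sketch (not a recommendation for production):
-- Costed without row goals, so the inner-side scan reverts to its full estimated size.
DELETE FROM tblFEStatsBrowsers
WHERE BrowserID NOT IN
(
    SELECT DISTINCT BrowserID
    FROM tblFEStatsPaperHits WITH (NOLOCK)
    WHERE BrowserID IS NOT NULL
) OPTION (MAXDOP 1, USE HINT ('DISABLE_OPTIMIZER_ROWGOAL'));
-- On older builds, OPTION (MAXDOP 1, QUERYTRACEON 4138) has the same effect (sysadmin required).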
It is also interesting to note that the Top operator in the execution plan is not associated with the Anti Semi Join (in particular the 'short-circuiting' Martin mentions). We can start to see where the Top comes from by first disabling an exploration rule called GbAggToConstScanOrTop:
DBCC RULEOFF ('GbAggToConstScanOrTop');
GO
DELETE FROM tblFEStatsBrowsers
WHERE BrowserID NOT IN
(
SELECT DISTINCT BrowserID
FROM tblFEStatsPaperHits WITH (NOLOCK)
WHERE BrowserID IS NOT NULL
) OPTION (MAXDOP 1, LOOP JOIN, RECOMPILE);
GO
DBCC RULEON ('GbAggToConstScanOrTop');
That plan has an estimated cost of 364.912, and shows that the Top replaced a Group By Aggregate (grouping by the correlated column BrowserID). The aggregate is not due to the redundant DISTINCT in the query text: it is an optimization that can be introduced by two exploration rules, LASJNtoLASJNonDist and LASJOnLclDist. Disabling those two as well produces this plan:
DBCC RULEOFF ('LASJNtoLASJNonDist');
DBCC RULEOFF ('LASJOnLclDist');
DBCC RULEOFF ('GbAggToConstScanOrTop');
GO
DELETE FROM tblFEStatsBrowsers
WHERE BrowserID NOT IN
(
SELECT DISTINCT BrowserID
FROM tblFEStatsPaperHits WITH (NOLOCK)
WHERE BrowserID IS NOT NULL
) OPTION (MAXDOP 1, LOOP JOIN, RECOMPILE);
GO
DBCC RULEON ('LASJNtoLASJNonDist');
DBCC RULEON ('LASJOnLclDist');
DBCC RULEON ('GbAggToConstScanOrTop');
That plan has an estimated cost of 40729.3 units.
Without the transformation from Group By to Top, the optimizer 'naturally' chooses a hash join plan with BrowserID aggregation before the anti semi join:
DBCC RULEOFF ('GbAggToConstScanOrTop');
GO
DELETE FROM tblFEStatsBrowsers
WHERE BrowserID NOT IN
(
SELECT DISTINCT BrowserID
FROM tblFEStatsPaperHits WITH (NOLOCK)
WHERE BrowserID IS NOT NULL
) OPTION (MAXDOP 1, RECOMPILE);
GO
DBCC RULEON ('GbAggToConstScanOrTop');
And without the MAXDOP 1 restriction, a parallel plan:
Another way to 'fix' the original query would be to create the missing index on BrowserID that the execution plan reports, as sketched below. Nested loops work best when the inner side is indexed. Estimating cardinality for semi joins is challenging at the best of times. Not having proper indexing (the large table doesn't even have a unique key!) will not help at all.
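A sketch of that index (the name is arbitrary, and the exact missing-index suggestion in the plan may differ in its options):
-- Give the inner side of the nested loops (anti semi) join a seekable index
-- on the correlated column instead of repeated partial clustered index scans.
CREATE NONCLUSTERED INDEX IX_tblFEStatsPaperHits_BrowserID
ON dbo.tblFEStatsPaperHits (BrowserID);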
I wrote more about this in Row Goals, Part 4: The Anti Join Anti Pattern.
When I run your script to create a statistics-only database and the query in the question, I get the following plan.
The Table Cardinalities shown in the plan are:
tblFEStatsPaperHits: 48,063,400
tblFEStatsBrowsers: 339
So it estimates that it will need to perform the scan on tblFEStatsPaperHits 339 times. Each scan has the correlated predicate tblFEStatsBrowsers.BrowserID = tblFEStatsPaperHits.BrowserID AND tblFEStatsPaperHits.BrowserID IS NOT NULL that is pushed down into the scan operator.
The plan doesn't mean that there will be 339 full scans, however. As it is under an anti semi join operator, as soon as the first matching row on each scan is found it can short-circuit the rest of it. The estimated subtree cost for this node is 1.32603 and the entire plan is costed at 1.41337.
For the Hash Join it gives the plan below
The overall plan is costed at 418.415 (about 300 times more expensive than the nested loops plan), with the single full clustered index scan on tblFEStatsPaperHits costed at 206.8 alone. Compare this with the 1.32603 estimate for 339 partial scans given earlier (average partial scan estimated cost = 0.003911592).
So this would indicate that it is costing each partial scan at roughly 1/53,000th of the cost of a full scan. If the costings were to scale linearly with row count, that would mean it is assuming that, on average, it would only need to process about 900 rows on each iteration before it finds a matching row and can short-circuit.
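A rough back-of-the-envelope check of those two figures, using the plan costs quoted above:
-- 206.8 is the full-scan cost, 0.003911592 the average partial-scan estimate.
SELECT
    206.8 / 0.003911592              AS full_vs_partial_cost_ratio,    -- ~52,869, i.e. ~53,000
    48063400 * (0.003911592 / 206.8) AS implied_rows_per_partial_scan; -- ~909, i.e. ~900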
I don't think the costings do scale in that linear way however. I think they also incorporate some element of fixed startup cost. Trying various values of TOP in the following query
SELECT TOP 147 BrowserID
FROM [dbo].[tblFEStatsPaperHits]
147 gives the closest estimated subtree cost to 0.003911592, at 0.0039113. Either way it is clear that it is basing the costing on the assumption that each scan will only have to process a tiny proportion of the table, in the order of hundreds of rows rather than millions.
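One way to see the fixed element (a sketch - the exact numbers depend on the statistics in the restored copy) is to compare the estimated subtree costs of a TOP (1) and a TOP (147) over the same table:
SELECT TOP (1) BrowserID
FROM [dbo].[tblFEStatsPaperHits];

SELECT TOP (147) BrowserID
FROM [dbo].[tblFEStatsPaperHits];
-- If costing scaled purely linearly with rows, the second estimate would be ~147x the first;
-- in practice the gap is far smaller, pointing at a fixed startup component per scan.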
I'm not sure exactly what maths it bases this assumption on and it doesn't really add up with the row count estimates in the rest of the plan (the 236 estimated rows coming out of the nested loops join would imply that there were 236 cases where no matching row was found at all and a full scan was required). I assume this is just a case where the modelling assumptions made fall down somewhat and leave the nested loops plan significantly under-costed.
In my book even one scan of 50M rows is unacceptable... My usual trick is to materialize the distinct values and delegate keeping the list up to date to the engine:
create view [dbo].[vwFEStatsPaperHitsBrowserID]
with schemabinding
as
select BrowserID, COUNT_BIG(*) as big_count
from [dbo].[tblFEStatsPaperHits]
group by [BrowserID];
go
create unique clustered index [cdxVwFEStatsPaperHitsBrowserID]
on [vwFEStatsPaperHitsBrowserID]([BrowserID]);
go
This gives you a materialized index with one row per BrowserID, eliminating the need to scan 50M rows. The engine will maintain it for you and the QO will use it 'as-is' in the statement you posted (w/o any hint or query rewrite).
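On editions that do not do automatic indexed view matching, the same effect can be had by referencing the view directly with a NOEXPAND hint; a minimal sketch against the view created above:
-- Read the pre-aggregated BrowserID list straight from the indexed view.
DELETE FROM tblFEStatsBrowsers
WHERE BrowserID NOT IN
(
    SELECT v.BrowserID
    FROM dbo.vwFEStatsPaperHitsBrowserID AS v WITH (NOEXPAND)
    WHERE v.BrowserID IS NOT NULL
);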
The downside is of course contention. Any insert or delete operation in tblFEStatsPaperHits (and I guess it is a logging table with heavy inserts) will have to serialize access to a given BrowserID. There are ways to make this workable (delayed updates, two-stage logging, etc.) if you're willing to buy into it.