Slow query with where clause

My guess is that the slow execution plan is applying the rate > 0 filter in an unfortunate way, for example as part of a scan on the inner side of a loop join.
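If you can run the query yourself (e.g. from Management Studio), one way to check that guess is to capture per-operator row counts and see where the rate filter lands. The query below is only a sketch: the join shape is taken from the temp-table rewrite further down, and the rate > 0 predicate is assumed to be in your original WHERE clause.

-- Emits one row per plan operator with actual Rows and EstimateRows,
-- so you can see whether rate > 0 sits under a scan on the inner side
-- of a nested loop.
set statistics profile on

select a.date, b.rate, c.type, a.value
from a
    inner join b on a.id = b.aid
    inner join c on b.id = c.bid
where a.name = 'xxx'
    and b.rate > 0

set statistics profile off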

If it comes down to it, one solution would be to store an intermediate result set and filter it in a separate statement.

I suggest this with the understanding that you can't make changes to your vendor's database and that you are basically stuck. This is essentially taking away some control from the optimizer -- something you don't typically want to do -- and is also adding a relatively small amount of overhead with the creation of a temp table. But it should alleviate the slowness in this case. I would continue to work with your vendor on indexing strategies if possible.

select a.date, b.rate, c.type, a.value 
into #t
from a inner join b on a.id = b.aid
    inner join c on b.id = c.bid
where a.name = 'xxx' 

select * from #t where rate>0

First of all, let me explain why you probably got a bad plan; here is one scenario. Without the rate predicate, table a is read first and, based on the SARG and index, the optimizer estimates, say, 10,000 rows; this might have used a merge join. It then joins to table b (assuming a 1-to-n mapping with, on average, 2 rows in b for each row from a), so the estimate after that join is 20,000 rows, and that result is then joined with table c. Depending on how big table c is, it could again have used, say, a merge join. Suppose the actual row counts were in the hundreds of thousands.

But when you added the SARG rate > 0, the optimizer no longer estimated 20K rows after the join of a and b; instead it estimated about 6,667 rows (30% is the default guess for a > predicate when there are no column statistics, e.g. with auto_create_statistics off), and it may therefore have chosen, say, a nested loop join with table c instead of a merge join. The actual row count could have been much higher, so that nested loop join ends up processing hundreds of thousands of rows, and that final join is what takes all the time.
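A quick way to see whether b.rate has any statistics object at all is to query the catalog views, which is read-only and should be safe even against a vendor database; the table and column names below are taken from the query above.

-- Lists any statistics objects on table b whose leading column is rate.
select s.name as stats_name, s.auto_created, s.user_created
from sys.stats s
    join sys.stats_columns sc
        on s.object_id = sc.object_id and s.stats_id = sc.stats_id
    join sys.columns col
        on sc.object_id = col.object_id and sc.column_id = col.column_id
where s.object_id = object_id('b')
    and col.name = 'rate'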

In summary, what I am trying to say is that the extra SARG led to poor estimates from the optimizer and therefore a bad plan.

Do not create an index here at all. An index is not the solution to everything; there is plenty of overhead in maintaining one, and in your case the less restrictive query runs much better, so the issue is not the index but the statistics. Check the following.

Is auto_create_statistics on or off for your database? Does table b have statistics on the rate column, and when were the statistics for these tables last created or updated? If there are none, create statistics on this column first and I am fairly sure your plan will go back to normal. If you cannot create statistics, try to force the same plan you get without rate > 0. As for CTEs: a CTE does not improve query performance as such (there may be exceptions); it is there to make the code more readable.
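If you want concrete statements to try, here is a sketch along those lines. The statistics name st_b_rate and the database name VendorDb are placeholders, and the OPTION (MERGE JOIN) hint is only one example of pinning the join strategy, not necessarily the right choice for your plan.

-- 1 = auto create statistics is on for the database, 0 = off.
select databasepropertyex('VendorDb', 'IsAutoCreateStatistics')

-- If the vendor allows it, create column statistics on b.rate so the
-- optimizer can estimate rate > 0 from a real histogram.
create statistics st_b_rate on b (rate) with fullscan

-- If statistics already exist but are stale, refresh them instead.
update statistics b with fullscan

-- If you cannot change the database at all, a query-level join hint is
-- one blunt way to force the join strategy of the fast plan, e.g.:
select a.date, b.rate, c.type, a.value
from a
    inner join b on a.id = b.aid
    inner join c on b.id = c.bid
where a.name = 'xxx'
    and b.rate > 0
option (merge join)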

Tags: Sql, Sql Server