Why would SQL Server ignore an index?
SQL Server will typically use an index only when it deems the index more expedient than going directly to the underlying table.
It seems likely the cost-based optimizer thinks it would be more expensive to actually use the index in question. You may see it use the index if, instead of doing `SELECT *`, you simply `SELECT T1Col1`.
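For illustration, assuming the question's table is something like `T1` with a nonclustered index on `T1Col1` (the table, index, and value names here are assumed):

```sql
-- Assumed schema: table dbo.T1 with a nonclustered index on T1Col1.

-- Needs every column, so matching rows must be fetched from the base
-- table; the optimizer may prefer a clustered index scan.
SELECT * FROM dbo.T1 WHERE T1Col1 = 'SomeValue';

-- The index alone can answer this query (it is "covering"),
-- so an index seek is much more likely.
SELECT T1Col1 FROM dbo.T1 WHERE T1Col1 = 'SomeValue';
```

The second query can be satisfied entirely from the nonclustered index, with no trips back to the base table.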
When you run `SELECT *`, you are telling SQL Server to return all columns in the table. To return those columns, SQL Server must read the pages for the rows that match the `WHERE` criteria from the table itself (clustered index or heap). SQL Server is probably estimating that the number of reads required to fetch the rest of the columns from the table means it might as well scan the table directly. It would be useful to see the actual query and the actual execution plan it uses.
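If you want to capture the actual plan from a query window, a minimal sketch (reusing the assumed `T1`/`T1Col1` names; in SSMS you can also toggle "Include Actual Execution Plan"):

```sql
-- Returns the actual execution plan as XML alongside the results.
SET STATISTICS XML ON;

SELECT * FROM dbo.T1 WHERE T1Col1 = 'SomeValue';  -- assumed query shape

SET STATISTICS XML OFF;
```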
To use the index when you're doing `SELECT *`, SQL Server must first read each row from the index that matches the value in your `WHERE` clause. From those index rows it gets the clustered index key for each row, and then it has to seek each of them separately in the clustered index (a key lookup). Since you said the values are not unique, SQL Server uses statistics to estimate how many of these key lookups it will have to do.
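If you want to see what the optimizer is basing that row-count estimate on, you can inspect the statistics attached to the index (the index name here is assumed):

```sql
-- Shows the density and histogram the optimizer uses to estimate
-- how many rows match a given T1Col1 value.
DBCC SHOW_STATISTICS ('dbo.T1', 'IX_T1_T1Col1');
```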
Most likely the cost estimate for scanning the non-clustered index plus the key lookups exceeds the cost estimate for a clustered index scan, and that's why the index is ignored.
You could try `SET STATISTICS IO ON` and then use an index hint to see whether the I/O cost is actually smaller when using the index or not. If the difference is big, look into the statistics and check whether they are out of date.
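A minimal sketch of that comparison, again assuming the index is named `IX_T1_T1Col1`:

```sql
-- Report logical/physical reads for each statement.
SET STATISTICS IO ON;

-- Let the optimizer choose (probably the clustered index scan).
SELECT * FROM dbo.T1 WHERE T1Col1 = 'SomeValue';

-- Force the nonclustered index and compare the reported reads.
SELECT * FROM dbo.T1 WITH (INDEX (IX_T1_T1Col1))
WHERE T1Col1 = 'SomeValue';

SET STATISTICS IO OFF;

-- If the statistics look stale, refresh them:
UPDATE STATISTICS dbo.T1 WITH FULLSCAN;
```

Compare the logical reads reported for the two queries; if the hinted version is clearly cheaper, stale statistics are a good suspect.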
Also, if your SQL actually uses variables rather than literal values, this might also be caused by parameter sniffing (i.e., the previous value used to create the plan matched a lot of rows in the table).
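To rule parameter sniffing in or out, one common check is to force a fresh compile for the current value (the variable name and type here are assumed):

```sql
DECLARE @val varchar(50) = 'SomeValue';  -- assumed variable and type

SELECT *
FROM dbo.T1
WHERE T1Col1 = @val
OPTION (RECOMPILE);  -- compile a plan for this specific value,
                     -- ignoring any plan cached for an earlier value
```

If the `OPTION (RECOMPILE)` version seeks the index while the plain version scans, a sniffed plan from an unselective earlier value is the likely culprit.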