Why does SQL Server prefer the nonclustered index over the clustered index?

If SQL Server has two indexes to choose from, both of which satisfy ("cover") the query and provide an equally good path to locating and/or sorting the rows, you might expect the choice to be a coin flip. It isn't, though; I believe there was some research done here (maybe by me, here and here) showing that it picks the most recently created index, or the first one alphabetically, or something else that is otherwise arbitrary.
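For illustration, a minimal repro of that tie-break (all object names here are made up) would be to create two functionally identical non-clustered indexes and check the actual execution plan to see which one the optimizer picks:

```sql
-- Hypothetical schema: a clustered primary key plus two non-clustered
-- indexes with identical definitions but different names.
CREATE TABLE dbo.Orders
(
    OrderID    int IDENTITY CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED,
    CustomerID int NOT NULL,
    OrderDate  datetime2 NOT NULL
);

CREATE INDEX IX_Orders_CustomerID_A ON dbo.Orders (CustomerID);
CREATE INDEX IX_Orders_CustomerID_B ON dbo.Orders (CustomerID);

-- Either index satisfies this query equally well; inspect the actual
-- plan to see which one is used for the tie-break.
SELECT CustomerID
FROM dbo.Orders
WHERE CustomerID = 42;
```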

However, if the "coin flip," as we'll call it, involves a choice between a non-clustered index and a clustered index, and again both indexes fully satisfy the query, SQL Server will always lean toward the non-clustered index. Why? Because it is guaranteed to be no wider than the clustered index, so it packs more rows per page and takes fewer reads to scan. The edge case where it is exactly the same width as the clustered index doesn't change that outcome.
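As a sketch of that, reusing the hypothetical dbo.Orders table above: the clustered index carries every column of the row, while a non-clustered index carries only its key column(s) plus the clustering key, so it fits more rows per page.

```sql
-- Both indexes can answer these queries. PK_Orders (the clustered index)
-- is as wide as the whole row; IX_Orders_CustomerID_A stores only
-- CustomerID + OrderID, so scanning it touches fewer pages.
SELECT COUNT(*)
FROM dbo.Orders;

SELECT OrderID, CustomerID
FROM dbo.Orders;
```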

You should look at the costs involved with each execution plan, and confirm that the costs SQL Server estimates for the non-clustered index are less than or equal to those for the clustered index. If you can show a counter-example, where the non-clustered index is chosen even though its estimated cost is higher than the clustered index's, please do.
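One way to compare them directly (using the hypothetical objects above) is to run the query with and without index hints and look at the estimated costs and logical reads side by side:

```sql
SET STATISTICS IO ON;

-- Optimizer's choice (typically the narrower non-clustered index):
SELECT OrderID, CustomerID
FROM dbo.Orders;

-- Force the clustered index (index id 1) for comparison:
SELECT OrderID, CustomerID
FROM dbo.Orders WITH (INDEX(1));

-- Force the non-clustered index explicitly:
SELECT OrderID, CustomerID
FROM dbo.Orders WITH (INDEX(IX_Orders_CustomerID_A));

SET STATISTICS IO OFF;
```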


The best way to get SQL Server to categorically stop using the non-clustered index would be to drop it. Your question doesn't explain why you want to read the clustered index instead of the non-clustered index.
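Against the hypothetical index above, that would look like the following; if you only want to see how a single query behaves against the clustered index, an index hint does that without dropping anything:

```sql
-- Dropping the non-clustered index removes it from consideration everywhere.
DROP INDEX IX_Orders_CustomerID_A ON dbo.Orders;

-- Alternatively, instead of dropping it, hint one query to read the
-- clustered index (index id 1) so you can compare behavior:
SELECT OrderID, CustomerID
FROM dbo.Orders WITH (INDEX(1));
```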

SQL Server believes it will be faster to use the non-clustered index, while still returning the exact results you need. So, why do you want SQL Server to return results more slowly by scanning the clustered index?

Add the exact query to your question, and upload the execution plans to https://www.brentozar.com/pastetheplan.