SQL Server changes execution plan - Part 2
The most likely explanation is that your sessions have different settings. SQL Server has various session settings that can affect the execution plan selected (and the results!)
The values for these settings can depend on how you connect to SQL Server, since different tools set the options different ways when they connect, and some (like SQL Server Management Studio) allow you to override the defaults as well.
For example, SQL Server Management Studio sets ARITHABORT ON by default when it connects, while most application client libraries (SqlClient, ODBC, OLE DB) leave it OFF, so the same query can compile to a different plan in SSMS than in your application.

The full table of defaults for each connection method appears in Erland Sommarskog's definitive article on this topic:

Slow in the Application, Fast in SSMS? Understanding Performance Mysteries

The whole thing is well worth reading, but you should definitely read the section titled "The Default Settings".
If you make sure all the settings have the same value on all connections, you should get the same execution plans.
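To see which options are actually in effect, you can compare them across connections. A minimal sketch using the documented `sys.dm_exec_sessions` DMV (viewing other sessions requires VIEW SERVER STATE permission; `DBCC USEROPTIONS` shows the same information for the current session only):

```sql
-- Compare the plan-affecting options across active user sessions.
-- Run this from any connection, or filter on specific session_ids.
SELECT
    s.session_id,
    s.program_name,
    s.ansi_nulls,
    s.ansi_padding,
    s.ansi_warnings,
    s.arithabort,
    s.concat_null_yields_null,
    s.quoted_identifier
FROM sys.dm_exec_sessions AS s
WHERE s.is_user_process = 1;
```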
For maximum compatibility with features like indexed views, you should ensure your settings are as follows:

| Setting                 | Required value |
|-------------------------|----------------|
| ANSI_NULLS              | ON             |
| ANSI_PADDING            | ON             |
| ANSI_WARNINGS           | ON             |
| ARITHABORT              | ON             |
| CONCAT_NULL_YIELDS_NULL | ON             |
| NUMERIC_ROUNDABORT      | OFF            |
| QUOTED_IDENTIFIER       | ON             |
Many of these settings are maintained for backward compatibility only. It is strongly recommended you set them as shown in the table above, or use a tool that sets them the right way automatically.
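If you cannot change how a particular tool connects, you can also set the options explicitly at the start of each batch or connection. A sketch of the recommended values:

```sql
-- Set the session options to the values required for indexed views
-- (and recommended in general); run at the start of each connection.
SET ANSI_NULLS ON;
SET ANSI_PADDING ON;
SET ANSI_WARNINGS ON;
SET ARITHABORT ON;
SET CONCAT_NULL_YIELDS_NULL ON;
SET NUMERIC_ROUNDABORT OFF;
SET QUOTED_IDENTIFIER ON;
```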
Books Online references:
- SET Statements (Transact-SQL)
- SET ANSI_NULLS (Transact-SQL)
- SET ANSI_PADDING (Transact-SQL)
- SET ANSI_WARNINGS (Transact-SQL)
- SET ARITHABORT (Transact-SQL)
- SET CONCAT_NULL_YIELDS_NULL (Transact-SQL)
- SET NUMERIC_ROUNDABORT (Transact-SQL)
- SET QUOTED_IDENTIFIER (Transact-SQL)
- Create Indexed Views
Update after plans were provided
The slow plan includes:

`CardinalityEstimationModelVersion="70"`

...whereas the fast plan says:

`CardinalityEstimationModelVersion="120"`
So the explanation is that one of your connections is using the original cardinality estimator (CE) and the other is using the new SQL Server 2014 CE. The difference in estimated row counts is enough for the new CE to choose a parallel execution plan; under the original CE, the estimated cost of the serial plan is below the cost threshold for parallelism.
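If you want to confirm which CE version a cached plan was compiled under without opening the XML by hand, something like the following sketch against the documented plan-cache DMVs should work (the attribute lives on the `StmtSimple` element of the showplan XML):

```sql
-- Report the CE model version each cached plan was compiled under.
-- query_plan can be NULL for some plans; filter further as needed.
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT
    qp.query_plan.value(
        '(//StmtSimple/@CardinalityEstimationModelVersion)[1]', 'int') AS ce_version,
    qt.text AS query_text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS qt;
```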
As to why different estimators are being used, I would guess that you have different context databases when the statements are run: one where the database's compatibility level selects the new CE, and one where the original CE is used. The database you are "in" when the query executes determines the CE model, not the database(s) referenced in the query.
For example, you may have different default databases associated with your logins. If you run `USE Klasje;` before executing the statements, both connections should use the same CE model.
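To check, and to test both estimators from the same context database, a sketch like this should work (`dbo.SomeTable` is a placeholder name, and QUERYTRACEON normally requires sysadmin membership):

```sql
-- Which CE each database selects by default: compatibility level 120
-- uses the new CE; 110 or lower uses the original one.
SELECT name, compatibility_level
FROM sys.databases;

-- Pin the CE per query with the documented trace flags:
SELECT COUNT_BIG(*) FROM dbo.SomeTable  -- placeholder table
OPTION (QUERYTRACEON 9481);             -- force the original (70) CE

SELECT COUNT_BIG(*) FROM dbo.SomeTable
OPTION (QUERYTRACEON 2312);             -- force the new (120) CE
```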
Final update: it turned out the target database was indeed set to an older compatibility level. Running the query with `master` as the context database produced the better plan. Be aware that switching to the new CE for all queries may cause regressions; you will need to test your workload before changing the database compatibility level in production.
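For reference, once testing shows the new CE is safe for the workload, raising the compatibility level is a one-line change (`Klasje` being the database named above):

```sql
-- Moves the database to the SQL Server 2014 level, which selects the new CE
-- for queries compiled in its context. Test first; some queries may regress.
ALTER DATABASE Klasje
SET COMPATIBILITY_LEVEL = 120;
```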
Just to rule out a difference in what the MVC app is actually sending: have you checked the query it executes using SQL Profiler?
I had a similar problem recently, and it turned out the query executed through my MVC app (using Entity Framework 6) was being run through `sp_executesql`, which caused SQL Server to use a different execution plan compared to running the plain SQL in Management Studio.
We changed it to use a stored procedure rather than LINQ.
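To illustrate the difference you would see in Profiler, here is a hypothetical example (table and column names are made up): SSMS runs the ad-hoc form with a literal, while Entity Framework sends a parameterized batch through `sp_executesql`, which can compile a different, reusable plan optimized for a typical parameter value rather than the literal:

```sql
-- What you typically run in SSMS (ad-hoc, literal value):
SELECT OrderID, OrderDate
FROM dbo.Orders               -- hypothetical table for illustration
WHERE CustomerID = 42;

-- What Entity Framework typically sends (parameterized):
EXEC sys.sp_executesql
    N'SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = @p0',
    N'@p0 int',
    @p0 = 42;
```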