Determining the tables used by, and the impact of, a large number of SPs

A scientific approach is based on routinely collecting process and procedure performance information, either using a commercial package or a more home-grown solution. You could also start completely from scratch, collecting information from Profiler or Extended Events. The important thing is to capture data regularly, and to make it easily consumable (e.g. using SSRS).
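
If you go the home-grown route, a lightweight collector can be as simple as snapshotting sys.dm_exec_procedure_stats into a history table on a schedule. A minimal sketch, assuming SQL Server 2008 or later (the table name and columns below are illustrative, not a standard):

```sql
-- Snapshot procedure-level stats into a history table (e.g. from an Agent job).
-- Note: the DMV counters reset when a plan leaves the cache or the instance
-- restarts, which is exactly why regular snapshots matter.
IF OBJECT_ID(N'dbo.ProcStatsHistory') IS NULL
    CREATE TABLE dbo.ProcStatsHistory
    (
        capture_time          datetime2 NOT NULL DEFAULT SYSDATETIME(),
        database_name         sysname   NOT NULL,
        procedure_name        sysname   NOT NULL,
        execution_count       bigint    NOT NULL,
        total_worker_time_ms  bigint    NOT NULL,
        total_elapsed_time_ms bigint    NOT NULL,
        total_logical_reads   bigint    NOT NULL
    );

INSERT INTO dbo.ProcStatsHistory
    (database_name, procedure_name, execution_count,
     total_worker_time_ms, total_elapsed_time_ms, total_logical_reads)
SELECT
    DB_NAME(ps.database_id),
    COALESCE(OBJECT_NAME(ps.object_id, ps.database_id), N'(unknown)'),
    ps.execution_count,
    ps.total_worker_time / 1000,    -- microseconds -> milliseconds
    ps.total_elapsed_time / 1000,
    ps.total_logical_reads
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.database_id = DB_ID();
```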

Usable historical information makes it easy to track gradual performance changes over time, anticipate increased resource requirements before they occur, diagnose sudden changes, and identify and test areas for improvement.

On that latter point, I believe your approach should be something like this:

  1. Establish good baseline information over a reasonable period of time
  2. Identify areas that might benefit from compression
  3. Test both ROW and PAGE compression in the improvement target area (see the sketch after this list)
  4. Test aggregate changes on a representative workload
  5. Refine or regress changes that were not beneficial
  6. Implement the change and monitor the effects in production
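
For step 3, sp_estimate_data_compression_savings gives you an estimate of the space saving for each setting before you rebuild anything, which makes a reasonable first filter alongside the baseline data. A minimal sketch (dbo.OrderDetail is a hypothetical target table):

```sql
-- Estimate the savings for both settings on a candidate table
EXEC sys.sp_estimate_data_compression_savings
    @schema_name      = N'dbo',
    @object_name      = N'OrderDetail',
    @index_id         = NULL,      -- all indexes
    @partition_number = NULL,      -- all partitions
    @data_compression = N'ROW';

EXEC sys.sp_estimate_data_compression_savings
    @schema_name      = N'dbo',
    @object_name      = N'OrderDetail',
    @index_id         = NULL,
    @partition_number = NULL,
    @data_compression = N'PAGE';

-- Once the estimates and the workload tests (steps 4-5) look good,
-- apply the winning setting:
ALTER TABLE dbo.OrderDetail
    REBUILD WITH (DATA_COMPRESSION = PAGE);
```

Remember the estimate only covers space; the CPU cost and workload-level effects still have to come from the tests in steps 4 and 5.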

This is a lot easier than attempting to track down performance regressions in complex code after the fact, without a baseline to compare with.


If you know for sure that these 35 programmables are the culprits, a combination of Profiler/Extended Events traces for elapsed time and cached plans, plus sys.dm_exec_query_stats, should get you a long way towards understanding where the pain lies.
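
For example, something along these lines pulls the most expensive cached statements together with their plans, which you can then narrow down to those 35 modules. A sketch only; the TOP value is arbitrary and the offset arithmetic is the usual pattern for extracting the statement text:

```sql
-- Top cached statements by total elapsed time, with the owning module and plan
SELECT TOP (50)
    OBJECT_NAME(st.objectid, st.dbid)     AS module_name,
    qs.execution_count,
    qs.total_elapsed_time / 1000          AS total_elapsed_ms,
    qs.total_logical_reads,
    SUBSTRING(st.text,
              (qs.statement_start_offset / 2) + 1,
              ((CASE qs.statement_end_offset
                    WHEN -1 THEN DATALENGTH(st.text)
                    ELSE qs.statement_end_offset
                END - qs.statement_start_offset) / 2) + 1) AS statement_text,
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)    AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE st.objectid IS NOT NULL      -- keep only statements that belong to modules
ORDER BY qs.total_elapsed_time DESC;
```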

@Paul's comment about recording a baseline is important, though.