Determining the tables and impact of a large number of SPs
A scientific approach is based on routinely collecting process and procedure performance information, either using a commercial package or a more home-grown solution. You could also start completely from scratch, collecting information from Profiler or Extended Events. The important thing is to capture data regularly and to make it easily consumable (e.g. using SSRS).
Usable historical information makes it easy to track gradual performance changes over time, anticipate increased resource requirements before they occur, diagnose sudden changes, and identify and test areas for improvement.
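As a starting point for that kind of collection, here is a minimal Extended Events sketch. It assumes SQL Server 2012 or later; the session name `BaselineCapture`, the file path, and the duration filter are placeholders you would adjust for your environment.

```sql
-- Capture completed RPC and batch events with duration, reads and writes,
-- written to a file target for later baselining/reporting.
CREATE EVENT SESSION [BaselineCapture] ON SERVER
ADD EVENT sqlserver.rpc_completed
    (ACTION (sqlserver.database_name, sqlserver.sql_text)
     WHERE duration > 1000),     -- microseconds; filter out trivial calls
ADD EVENT sqlserver.sql_batch_completed
    (ACTION (sqlserver.database_name, sqlserver.sql_text)
     WHERE duration > 1000)
ADD TARGET package0.event_file
    (SET filename = N'C:\Traces\BaselineCapture.xel',  -- placeholder path
         max_file_size = 256)                          -- MB per rollover file
WITH (MAX_MEMORY = 4096 KB, EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS);

ALTER EVENT SESSION [BaselineCapture] ON SERVER STATE = START;
```

The `.xel` files can then be loaded on a schedule into a baseline database and surfaced through SSRS or similar.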
On that latter point, I believe your approach should be something like this:
- Establish good baseline information over a reasonable period of time
- Identify areas that might benefit from compression
- Test both `ROW` and `PAGE` compression in the improvement target area (see the sketch after this list)
- Test aggregate changes on a representative workload
- Refine or regress changes that were not beneficial
- Implement the change and monitor the effects in production
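For the compression-testing step, a minimal sketch follows. The table name `dbo.OrderDetail` is purely illustrative; estimate the saving first, then rebuild with the chosen level and re-measure against your baseline.

```sql
-- Estimate space savings for PAGE compression across all indexes and partitions.
EXEC sys.sp_estimate_data_compression_savings
     @schema_name      = N'dbo',
     @object_name      = N'OrderDetail',   -- hypothetical table
     @index_id         = NULL,             -- all indexes
     @partition_number = NULL,             -- all partitions
     @data_compression = N'PAGE';

-- If the estimate (and available CPU headroom) justifies it, apply and re-test.
ALTER TABLE dbo.OrderDetail REBUILD WITH (DATA_COMPRESSION = ROW);
-- or, for the heavier option:
ALTER INDEX ALL ON dbo.OrderDetail REBUILD WITH (DATA_COMPRESSION = PAGE);
```

Remember that compression trades CPU for I/O and buffer pool space, which is exactly why the representative-workload test and the baseline comparison matter.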
This is a lot easier than attempting to track down performance regressions in complex code after the fact, without a baseline to compare with.
If you know for sure that these 35 stored procedures are the culprits, a combination of Profiler/Extended Events traces for elapsed time and cached plans, plus sys.dm_exec_query_stats, should get you a long way towards understanding where the pain lies.
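As a sketch of the DMV side of that, the query below aggregates cached-plan statistics per module. It assumes the procedures' plans are still in cache and that they live in the current database; `TOP (35)` simply mirrors the number of suspect procedures.

```sql
-- Aggregate cached statement statistics per module to see where elapsed time goes.
SELECT TOP (35)
       OBJECT_NAME(qt.objectid, qt.dbid)   AS procedure_name,
       SUM(qs.execution_count)             AS executions,
       SUM(qs.total_elapsed_time) / 1000   AS total_elapsed_ms,
       SUM(qs.total_worker_time)  / 1000   AS total_cpu_ms,
       SUM(qs.total_logical_reads)         AS total_logical_reads
FROM   sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
WHERE  qt.objectid IS NOT NULL              -- only statements belonging to modules
GROUP BY qt.objectid, qt.dbid
ORDER BY total_elapsed_ms DESC;
```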
@Paul's comment about recording a baseline is important, though.