Factors to consider when rebuilding or reorganizing indexes in SQL Server 2012
I'm going to go in a different direction than other answers and ask: After your defragmentation routine has run for however many minutes or hours, and used all sorts of resources (CPU, memory, disk, probably tempdb), how do you quantify and compare the expenditure of time and server resources to defragment indexes to any improvements you've gained from doing so?
I realize this doesn't answer your question, but I want you to think carefully about what you're doing rather than get yourself all worked up and your server all beat up about a non-issue.
There's a weird obsession (and believe me, I used to have it, too) with SQL Server users around index fragmentation. It's usually based on advice from Microsoft from 15 years ago and lots of hysterical responses on Q&A forums (this one included).
People have had the numbers you reference (5% to reorg, 30% to rebuild) pounded into their heads with religious fervor forever. So they keep doing it. I've seen it cause all sorts of problems, too: blocking and deadlocks, corruption, interruptions to production workloads (long-running maintenance), and worst of all, performing index maintenance at the exclusion of more important maintenance, like DBCC CHECKDB.
The one good thing that index rebuilds do is update statistics, which (simplifying greatly) will usually give the optimizer much better information about the data it's working with, and (again, simplifying greatly) can chase a bad execution plan out of the cache.
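If updated statistics are the main benefit you're after, you can get them without paying for a full rebuild. A minimal sketch (the table name is a placeholder):

```sql
-- Refresh statistics for one table with a full scan: the same
-- statistics quality an index rebuild would produce, without
-- actually rebuilding any pages.
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;

-- Or refresh all out-of-date statistics in the database with
-- the default sampling rate:
EXEC sp_updatestats;
```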
People will also do counterproductive things in maintenance plans, like rebuild all their indexes, then reorganize all their indexes, and then update statistics. The rebuild already updated statistics with a full scan, so the later sampled update replaces better statistics with worse ones, and reorganizing an index that was just rebuilt accomplishes nothing. I realize this likely isn't you, just adding it for completeness.
There are also more considerations than just fragmentation percent to rebuild or reorg:
- Do queries even use this index?
- Do I really want to reorg and LOB compact a 50+ GB index (single threaded, ew)?
- Am I on Standard Edition where rebuilds are OFFLINE and can cause blocking?
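The first question in that list can be answered from the usage DMV. A sketch (object names are placeholders; note the counters reset when the instance restarts, so don't drop an index based on a freshly restarted server):

```sql
-- Has anything actually read from this table's indexes since the
-- last restart? (sys.dm_db_index_usage_stats resets on restart.)
SELECT i.name AS index_name,
       us.user_seeks, us.user_scans, us.user_lookups, us.user_updates
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS us
       ON us.object_id   = i.object_id
      AND us.index_id    = i.index_id
      AND us.database_id = DB_ID()
WHERE i.object_id = OBJECT_ID(N'dbo.MyTable');
```

An index with many writes (user_updates) but no seeks, scans, or lookups is costing you maintenance for nothing.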
For a bit of background, check out the blog posts here. Full disclosure: it's the company I work for, though people other than me have written a lot about it. There's even a post in there about when index fragmentation does matter.
Other database platforms have had better advice about this topic for a long time.
EDIT: The best way to have index fragmentation never ever in a million billion years matter: cache all your data in RAM.
Here are some of the differences that could help you decide which one to use:
- Rebuild can use multiple CPUs, whereas Reorganize is always single threaded
- Rebuild updates index statistics; Reorganize does not
- Rebuild is faster for heavily fragmented indexes; Reorganize is faster for lightly fragmented ones
- You can set FILLFACTOR with Rebuild, but not with Reorganize
- You can reduce transaction log usage by switching to the BULK_LOGGED recovery model when using Rebuild, whereas Reorganize is always fully logged
- Reorganize is always online; Rebuild can be online or offline (online is an Enterprise-only feature, and even then requires a brief index outage while the newly rebuilt index is swapped in)
- Rebuild has to create the new index before dropping the old one, so it needs extra space (approximately 1.2 times the size of the index), whereas Reorganize works one page at a time and only requires 8 KB of free space
- Reorganize can be stopped partway through without losing the work done so far, whereas Rebuild has to roll back the entire transaction if stopped in the middle
- Rebuild does not operate on LOB data, but Reorganize compacts LOB data by default
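For reference, the syntax for the two operations looks like this (index and table names are placeholders; as noted above, ONLINE = ON requires Enterprise Edition):

```sql
-- Rebuild: builds a new index from scratch; can run in parallel,
-- online (Enterprise only), and can reset the fill factor.
ALTER INDEX IX_MyTable_MyColumn ON dbo.MyTable
REBUILD WITH (ONLINE = ON, FILLFACTOR = 90, MAXDOP = 4);

-- Reorganize: always online, single threaded, works page by page;
-- compacts LOB data by default.
ALTER INDEX IX_MyTable_MyColumn ON dbo.MyTable
REORGANIZE WITH (LOB_COMPACTION = ON);
```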
As far as avg_fragmentation_in_percent is concerned, I wouldn't give much importance to that number. Even if you rebuild the index and get rid of logical fragmentation, how do you make sure the pages actually end up next to each other on your storage? Page density matters more: if your pages are only half full, the extra space is cheap on disk, but those half-empty pages waste precious memory once they're read into the buffer pool.
You're right to consider both Avg_fragmentation_in_percent and Avg_page_space_used_in_percent when considering whether to do reorganise, rebuild, or nothing.
Even if Avg_fragmentation_in_percent is low, a low Avg_page_space_used_in_percent may gain benefit from reorganisation (due to the extra IO and cache resources used by unfilled pages).
And the "greater than 30% = rebuild" advice may be better read as "if Avg_fragmentation_in_percent > 30% and you have been regularly reorganising, then rebuild". If you haven't been reorganising, then try that first; as your test shows, that may be all that's needed even with very high Avg_fragmentation_in_percent.
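Both metrics come from the same DMV. A sketch of how you might pull them together (note that avg_page_space_used_in_percent is NULL in the default 'LIMITED' mode, so 'SAMPLED' or 'DETAILED' is needed; the page_count filter is an arbitrary threshold to skip tiny indexes):

```sql
-- Fragmentation and page density side by side; 'SAMPLED' mode is
-- required to populate avg_page_space_used_in_percent.
SELECT OBJECT_NAME(ps.object_id) AS table_name,
       i.name                    AS index_name,
       ps.avg_fragmentation_in_percent,
       ps.avg_page_space_used_in_percent,
       ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id
 AND i.index_id  = ps.index_id
WHERE ps.page_count > 1000   -- ignore small indexes
ORDER BY ps.avg_fragmentation_in_percent DESC;
```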
As for the exact trigger levels to use, that really depends on your data and how it's used, and on various other factors like those jesijesi posted.