When is it OK to shrink a Database?
Is the database going to grow again? If so, the effort you put into the shrink operation is just going to be wasted: once you've got the file size down and then add more data, the file will just have to grow again, and transactions have to wait for that growth to happen. If you have sub-optimal auto-growth settings and/or a slow drive, this growth activity will be quite painful.
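To see what your growth settings actually are before deciding anything, a quick look at sys.database_files is enough. A minimal sketch, run in the context of the database in question:

```sql
-- Inspect file sizes and auto-growth settings for the current database.
-- size and growth are stored in 8 KB pages; growth is a percentage
-- when is_percent_growth = 1.
SELECT name,
       size * 8 / 1024 AS size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + ' %'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END AS growth_setting
FROM sys.database_files;
```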
If you do shrink the database, what are you going to use the freed-up disk space for? Again, if you are just going to keep that space free in case this database grows again, then you're just spinning your wheels.
What you might consider doing, now that you've got all this free space in the file, is rebuilding your indexes so that they are better optimized (and it is much less painful to do this while you have the free space - think about trying to change a sweater in a tiny closet vs. a big bedroom).
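As a minimal sketch, with dbo.Orders standing in for one of your own tables, the rebuild itself is just:

```sql
-- Rebuild every index on the (hypothetical) table, taking advantage of
-- the free space in the file to build the new copies contiguously.
ALTER INDEX ALL ON dbo.Orders REBUILD;
```

Repeat per table, or generate the statements from sys.tables if you have many.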
So unless this was a major cleanup operation and you really won't be ramping up to the same level of data again, I'd just leave it as is and focus on other areas of optimization.
Really, a reorganise-and-shrink is never recommended.
If you can take the apps the database is serving offline, you can speed up the process and reduce index fragmentation by removing all indexes and primary/foreign key constraints before the shrink, then recreating all the indexes and keys afterwards. With the indexes gone there is less data to be moved around: only the data pages get shuffled, not the now non-existent index pages, so the shrink itself runs faster.
Recreating the indexes after the shrink means they should not be significantly fragmented, and having them absent during the shrink means rebuilding them won't leave many small "holes" in the page allocation within the files that could invite fragmentation later.
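A rough sketch of the sequence, assuming a hypothetical index and foreign key and a data file whose logical name is MyDb_Data (script out your real indexes and constraints before dropping anything):

```sql
-- 1. Drop constraints and indexes (foreign keys before any keys they reference).
ALTER TABLE dbo.OrderLines DROP CONSTRAINT FK_OrderLines_Orders;
DROP INDEX IX_Orders_CustomerId ON dbo.Orders;

-- 2. Shrink the data file to the target size (in MB); with the indexes
--    gone, only data pages need to be shuffled.
DBCC SHRINKFILE (N'MyDb_Data', 10240);

-- 3. Recreate the indexes and keys.
CREATE INDEX IX_Orders_CustomerId ON dbo.Orders (CustomerId);
ALTER TABLE dbo.OrderLines
    ADD CONSTRAINT FK_OrderLines_Orders
    FOREIGN KEY (OrderId) REFERENCES dbo.Orders (OrderId);
```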
Another option, if you can offline the applications, is to migrate all the data to a fresh database of the same structure. If your build process is solid you should be able to build that blank DB quickly; if not, create one from the current DB (restore a backup of the current one, truncate/delete all the contents of the tables, and perform a full shrink).
You might still want to drop all the indexes in the destination and recreate them afterwards, as this can be a lot more efficient when changing a lot of the indexed data (100% of it, in this case). To speed up the copy process, put the data file(s) of the destination database on different physical drives from the source (unless you are using SSDs, in which case you don't need to care about reducing head movements); you can move them to the source's location when you are done.
Also, if creating the destination as new (rather than by blanking a copy of the source), create it with an initial size that will contain all the current data plus some months' worth of growth. That will make the data copy a little faster again, as it won't be allocating new space every now and again throughout the process.
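A sketch of that pre-sized destination, with placeholder names, paths, and sizes:

```sql
-- Size the files for all current data plus a few months of growth so
-- the copy doesn't pause for file growth part-way through.
CREATE DATABASE DestDb
ON PRIMARY
    (NAME = N'DestDb_Data',
     FILENAME = N'D:\Data\DestDb.mdf',
     SIZE = 50GB,
     FILEGROWTH = 1GB)
LOG ON
    (NAME = N'DestDb_Log',
     FILENAME = N'E:\Logs\DestDb.ldf',
     SIZE = 8GB,
     FILEGROWTH = 1GB);
```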
This migration approach might be better than using shrink because it replicates the intended effect of the shrink operation, but potentially with far less fragmentation (which is the unintended consequence of a reorganise-and-shrink). A shrink simply takes blocks from near the end of the file and puts them in the first space nearer the beginning, making no effort to keep related data together.
I suspect the result will be more efficient space-wise too, as there are likely to be fewer part-used pages afterwards. A shrink will just move part-used pages around; moving the data is more likely to result in full pages, especially if you insert into the destination in the order of a table's clustered key/index (where a table has one) and create the other indexes after the data has all migrated.
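The per-table copy is then a minimal sketch along these lines, with hypothetical table and column names:

```sql
-- Insert in clustered-key order so pages fill contiguously; create the
-- non-clustered indexes and FKs only after all tables are copied.
-- If OrderId is an IDENTITY column you'd also need
-- SET IDENTITY_INSERT DestDb.dbo.Orders ON around the insert.
INSERT INTO DestDb.dbo.Orders (OrderId, CustomerId, OrderDate)
SELECT OrderId, CustomerId, OrderDate
FROM   SourceDb.dbo.Orders
ORDER BY OrderId;  -- the clustered key
```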
Of course, if you can't take the apps offline at all, just performing a shrink is your only option, so if you really need to reclaim the space, go with that. Depending on your data, access patterns, common working-set size, how much RAM the server has, and so forth, the extra internal fragmentation may not be all that significant in the end.
For the copy operation, either SSIS or base T-SQL would work just as well (the SSIS option might be less efficient, but potentially easier to maintain later). If you create the FK relationships at the end, along with the indexes, you can do a simple "for each table, copy" in either case. Of course, for a one-off, a shrink+reorganise is probably fine too, but I just like to scare people into never considering regular shrinks! (I've known people to schedule them daily.)
If you are running out of space, and your data isn't supposed to get that big again, then shrink, but rebuild your indexes afterwards with appropriate fill factors that allow for typical growth.
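For example (a sketch; dbo.Orders is a placeholder and 80 is purely illustrative, so pick a value based on how your data actually changes):

```sql
-- Leave 20% free space per leaf page so typical inserts and updates
-- don't immediately cause page splits.
ALTER INDEX ALL ON dbo.Orders REBUILD WITH (FILLFACTOR = 80);
```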
If your end goal is actually to reduce backup size, make sure you implement a comprehensive backup strategy to clear out the transaction log, and when you back up the DB, use the compression option (a minimal sketch of both follows the links below).
- Create a Full Database Backup (SQL Server)
- Transaction Log Backups (SQL Server)
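A sketch of both pieces, assuming a database named MyDb and placeholder backup paths:

```sql
-- Compressed full backup; INIT overwrites any existing backup set in the file.
BACKUP DATABASE MyDb
    TO DISK = N'F:\Backups\MyDb_full.bak'
    WITH COMPRESSION, INIT;

-- Regular log backups (under the FULL recovery model) are what allow
-- transaction log space to be reused.
BACKUP LOG MyDb
    TO DISK = N'F:\Backups\MyDb_log.trn'
    WITH COMPRESSION;
```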
I wouldn't recommend auto-growth of 5 GB unless you typically expect to grow by 5 GB frequently; you could have intermittent performance problems otherwise. Your data size should first be set to what you think is required for, say, a year, and auto-growth should be set to a size you've tested doesn't affect operating performance. See Don't Touch that Shrink Database Button In SQL Server! by Mike Walsh.
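Setting a fixed, tested increment is one ALTER DATABASE per file; a sketch with placeholder names:

```sql
-- 'MyDb' and 'MyDb_Data' are placeholders for your database and the
-- data file's logical name; 512 MB is an example increment, not advice.
ALTER DATABASE MyDb
    MODIFY FILE (NAME = N'MyDb_Data', FILEGROWTH = 512MB);
```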
Rebuilding indexes before shrinking causes the indexes to be badly laid out; it's not good to rebuild and then shrink. Shrinking mangles the indexes to recover space, so rebuilding beforehand and then shrinking is pointless. See When to use Auto Shrink by Thomas LaRock.