Change column types in a huge table
Depending on what change you are making, sometimes it can be easier to take a maintenance window. During that window (when nobody should be able to change the data in the table) you can:
- drop any indexes/constraints pointing to the old column, and disable triggers
- add a new nullable column with the new data type (even if it is meant to be NOT NULL)
- update the new column, setting it equal to the old column's value; you can do this in chunks of individual transactions (say, 10,000 rows at a time) using
  UPDATE TOP (10000) ... SET newcol = oldcol WHERE newcol IS NULL
  and issue a CHECKPOINT between batches to avoid overrunning your log (see the batching sketch after this list)
- once the updates are all done, drop the old column
- rename the new column (and add a NOT NULL constraint if appropriate)
- rebuild indexes and update statistics
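To make the above concrete, here is a rough sketch of the whole sequence against a hypothetical dbo.BigTable whose ID column is being widened from INT to BIGINT; all object names and the 10,000-row batch size are just placeholders to adapt:

    -- 1. (After dropping indexes/constraints on the old column and disabling triggers)
    --    add the new column as NULLable, even though it will end up NOT NULL.
    ALTER TABLE dbo.BigTable ADD ID_new BIGINT NULL;
    GO

    -- 2. Copy values over in small batches, checkpointing between batches so the
    --    log can be reused (assumes SIMPLE recovery; under FULL recovery, take
    --    log backups instead).
    WHILE 1 = 1
    BEGIN
        UPDATE TOP (10000) dbo.BigTable
           SET ID_new = ID
         WHERE ID_new IS NULL;

        IF @@ROWCOUNT = 0 BREAK;   -- nothing left to copy

        CHECKPOINT;
    END
    GO

    -- 3. Once all rows are copied, swap the columns.
    ALTER TABLE dbo.BigTable DROP COLUMN ID;
    GO
    EXEC sys.sp_rename N'dbo.BigTable.ID_new', N'ID', N'COLUMN';
    GO
    ALTER TABLE dbo.BigTable ALTER COLUMN ID BIGINT NOT NULL;
    GO

    -- 4. Recreate the indexes/constraints you dropped earlier and refresh statistics.
    UPDATE STATISTICS dbo.BigTable;

Note that each pass over WHERE ID_new IS NULL has to skip rows it already touched, so on very large tables it can be worth driving the batches by ranges of the clustering key instead.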
The key here is that this approach lets you perform the update incrementally (the third step above), which you can't do in a single ALTER TABLE command.
This assumes the column is not playing a major role in data integrity - if it is involved in a bunch of foreign key relationships, there are more steps.
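For example, for a single incoming foreign key the extra steps look roughly like this (the constraint, table and column names here are invented for the sketch):

    -- Drop the incoming foreign key before touching the referenced column.
    ALTER TABLE dbo.ChildTable DROP CONSTRAINT FK_ChildTable_BigTable;

    -- ...perform the column swap on dbo.BigTable (and widen the referencing
    -- column in dbo.ChildTable the same way), then put the constraint back.
    -- WITH CHECK revalidates existing rows so the constraint is trusted again.
    ALTER TABLE dbo.ChildTable WITH CHECK
        ADD CONSTRAINT FK_ChildTable_BigTable
        FOREIGN KEY (BigTableID) REFERENCES dbo.BigTable (ID);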
EDIT
Also, just wondering out loud (I haven't done any testing for this, but adding it to the list): would page + row compression help here? If you change an INT to a BIGINT, with compression in place SQL Server should still be treating all the values as if they fit in an INT. Again, I haven't tested whether this would make the alter faster or slower, or how much longer it would take to add compression in the first place. Just throwing it out there.
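If you want to experiment with that, turning compression on is just a rebuild; again this is only a sketch against the hypothetical dbo.BigTable, with no claim about whether the subsequent alter actually ends up cheaper:

    -- Row/page compression stores integer values in as few bytes as they need,
    -- so in theory an INT-to-BIGINT change may not have to rewrite every row.
    ALTER TABLE dbo.BigTable REBUILD WITH (DATA_COMPRESSION = PAGE);

    -- Then try the type change in place and compare timings/log usage.
    ALTER TABLE dbo.BigTable ALTER COLUMN ID BIGINT NOT NULL;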