Alter table on live production databases

When you issue an ALTER TABLE in PostgreSQL, it takes an ACCESS EXCLUSIVE lock on the table that blocks everything, including SELECT. However, this lock can be quite brief if the table doesn't require rewriting and no new UNIQUE, CHECK or FOREIGN KEY constraints need expensive full-table scans to verify.

If in doubt, you can generally just try it! All DDL in PostgreSQL is transactional, so it's safe to cancel an ALTER TABLE if it takes too long and starts holding up other queries. The lock levels required by the various commands are documented on the explicit locking page.
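If you do try it, it helps to bound how long the ALTER TABLE will wait for its lock, so it fails fast instead of queueing behind a long-running query and blocking every statement that arrives after it. A sketch (the added column here is made up for illustration):

```sql
-- Give up on acquiring the lock after 2 seconds instead of queueing
-- behind long-running queries and blocking all later ones.
SET lock_timeout = '2s';

-- A fast operation: adding a nullable column needs no table rewrite,
-- so the ACCESS EXCLUSIVE lock is held only momentarily.
ALTER TABLE t ADD COLUMN note text;

RESET lock_timeout;
```

If the timeout fires, the ALTER TABLE simply rolls back and can be retried at a quieter moment.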

Some normally-slow operations can be sped up enough to be safe to perform without downtime. For example, if you have a table t and want to change its column customercode from integer NOT NULL to text because the customer has decided all customer codes must now begin with an X, you could write:

ALTER TABLE t ALTER COLUMN customercode TYPE text USING ( 'X'||customercode::text );

... but that would lock the whole table for the duration of the rewrite. Adding a column with a DEFAULT does the same (though since PostgreSQL 11, adding a column with a constant default no longer rewrites the table). The change can instead be done in a couple of steps to avoid the long lock, but applications must be able to cope with the temporary duplication:

-- Fast: adding a nullable column needs no table rewrite
ALTER TABLE t ADD COLUMN customercode_new text;
-- Backfill and swap the columns in a single transaction
BEGIN;
LOCK TABLE t IN EXCLUSIVE MODE;  -- blocks writes, but SELECTs continue
UPDATE t SET customercode_new = 'X'||customercode::text;
ALTER TABLE t DROP COLUMN customercode;
ALTER TABLE t RENAME COLUMN customercode_new TO customercode;
COMMIT;

This will only prevent writes to t during the process; the lock name EXCLUSIVE is somewhat deceptive in that it excludes everything except SELECT. Only the ACCESS EXCLUSIVE mode excludes absolutely everything; see the lock modes documentation. There's a risk that this operation could be rolled back by a deadlock, because the ALTER TABLE requires a lock upgrade from EXCLUSIVE to ACCESS EXCLUSIVE, but at worst you'll just have to run it again.

You can even avoid that lock and do the whole thing live by creating a trigger on t that automatically populates customercode_new from customercode whenever an INSERT or UPDATE comes in.
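A sketch of that trigger, assuming the two-column setup from above (the function and trigger names are made up; test thoroughly before running anything like this against production):

```sql
-- Populate customercode_new from customercode on every write.
CREATE FUNCTION sync_customercode() RETURNS trigger AS $$
BEGIN
    NEW.customercode_new := 'X' || NEW.customercode::text;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER t_sync_customercode
    BEFORE INSERT OR UPDATE ON t
    FOR EACH ROW
    EXECUTE PROCEDURE sync_customercode();

-- Existing rows can then be backfilled in small batches, and the final
-- column swap needs only a brief lock.
```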

There are also built-in tools like CREATE INDEX CONCURRENTLY and ALTER TABLE ... ADD CONSTRAINT ... USING INDEX that are designed to let DBAs reduce exclusive locking durations by doing the slow work up front in a concurrency-friendly way.
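For instance, a UNIQUE constraint can be added with only a momentary exclusive lock by building its index concurrently first (the index and constraint names here are illustrative):

```sql
-- Slow step, but concurrent reads and writes continue throughout.
CREATE UNIQUE INDEX CONCURRENTLY t_customercode_uniq
    ON t (customercode);

-- Fast step: adopt the finished index as a constraint. This takes an
-- exclusive lock, but only for an instant since the index already exists.
ALTER TABLE t
    ADD CONSTRAINT t_customercode_uniq UNIQUE USING INDEX t_customercode_uniq;
```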

The pg_reorg tool or its successor pg_repack can be used for some table restructuring operations as well.


Percona has come up with its own tool for performing online schema changes in MySQL.

The tool is called pt-online-schema-change

It involves triggers, so please read the documentation carefully.

According to the documentation, the major operations performed are:

  • Sanity checks
  • Chunking
  • Online schema change
    • Create and alter temporary table
    • Capture changes from the table to the temporary table
    • Copy rows from the table to the temporary table
    • Synchronize the table and the temporary table
    • Swap/rename the table and the temporary table
    • Cleanup

Shutting the system down and doing all changes at once may be very risky. If something goes wrong, and frequently it does, there is no easy way back.

As an Agile developer, I sometimes need to refactor tables without any downtime at all, as those tables are being modified and read from.

The following approach has low risk, because the change is done in several low-risk steps that are very easy to roll back:

  • Make sure that all the modules accessing the table are well covered with automated tests.
  • Create a new table. Alter all procedures that modify the old table, so that they modify both old and new tables.
  • Migrate existing data into new structure. Do it in smallish batches, so that it does not seriously impact the overall performance on the server.
  • Verify that the migration of data succeeded.
  • Redirect some of the selecting procedures from the old table to the new one. Use automated tests to make sure that the changed modules are still correct. Make sure their performance is acceptable. Deploy the altered procedures.
  • Repeat the previous step until all the reports use the new table.
  • Change the procedures that modify the tables, so that they only access the new table.
  • Archive the old table and remove it from the system.
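The batched-migration step above might look roughly like this in SQL (all table and column names are hypothetical):

```sql
-- Copy rows that have not been migrated yet, a small batch at a time;
-- run this repeatedly until it affects zero rows.
INSERT INTO customers_new (id, customercode)
SELECT o.id, 'X' || o.customercode::text
FROM customers_old o
WHERE NOT EXISTS (
    SELECT 1 FROM customers_new n WHERE n.id = o.id
)
ORDER BY o.id
LIMIT 1000;

-- A crude verification that the migration is complete:
SELECT count(*) FROM customers_old;
SELECT count(*) FROM customers_new;
```

Keeping each batch small lets the migration run during normal operation without seriously impacting overall server performance.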

We have used this approach many times to change large live production tables without downtime, with no issues at all.