Managing DB migrations on a Kubernetes cluster
From an automation/orchestration perspective, my sense is that problems like this are intended to be solved with Operators, using the recently released Operator Framework:
https://github.com/operator-framework
The idea is that there would be a Postgres Migrations Operator (which, to my knowledge, doesn't exist yet) that would lie idle until a custom resource describing the migration is posted to the cluster/namespace.
The Operator would wake up, work out what's involved in the intended migration, analyze the cluster to construct a migration plan, and then perform the steps you describe:
- put the application into some kind of user-visible maintenance mode
- take down the existing pods
- run the migration
- verify
- recreate the application pods
- test
- take the application out of maintenance mode
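To make that concrete, such an operator might be driven by a custom resource along these lines. This is purely illustrative: the PostgresMigration kind and every field on it are invented here, since no such operator exists.

apiVersion: migrations.example.com/v1alpha1   # hypothetical API group; no such operator exists
kind: PostgresMigration
metadata:
  name: add-users-email-index
spec:
  database: app-postgres                # hypothetical: which database to migrate
  image: myorg/app-migrations:v42       # hypothetical: image containing the migration scripts
  maintenanceMode: true                 # hypothetical: have the operator enable maintenance mode
  deployments:                          # hypothetical: pods to take down and recreate
    - app-web
    - app-worker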
That doesn't help you now, though.
> Ideal solution would be to stop all pods, run the migration and recreate them. But I am not sure how to achieve it properly with Kubernetes.
I see from one of the comments that you use Helm, so I'd like to propose a solution leveraging Helm's hooks:
> Helm provides a hook mechanism to allow chart developers to intervene at certain points in a release's life cycle. For example, you can use hooks to:
>
> - Load a ConfigMap or Secret during install before any other charts are loaded.
> - Execute a Job to back up a database before installing a new chart, and then execute a second job after the upgrade in order to restore data.
> - Run a Job before deleting a release to gracefully take a service out of rotation before removing it.
https://helm.sh/docs/topics/charts_hooks/
You could package your migration as a k8s Job and leverage the pre-install or pre-upgrade hook to run it. These hooks run after templates are rendered, but before any new resources are created in Kubernetes. Thus, your migrations will run before your Pods are deployed.
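For example, a migration Job might look like this; the image name and migrate command are placeholders for your own migration tooling, and the weight of "0" makes it run after the delete job shown further down (weight "-1"):

apiVersion: batch/v1
kind: Job
metadata:
  name: "db-migrations"
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "0"            # higher than the delete job's "-1", so it runs second
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  backoffLimit: 0                         # fail fast rather than retrying a partial migration
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: "myorg/app-migrations:1.2.3"   # placeholder: image containing your migration tool
          command: ["./migrate", "up"]          # placeholder: whatever invokes your migrations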
To delete the deployments prior to running your migrations, create a second pre-install/pre-upgrade hook with a lower helm.sh/hook-weight that deletes the target deployments:
apiVersion: batch/v1
kind: Job
metadata:
  name: "pre-upgrade-hook1"
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-weight": "-1"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    metadata:
      name: "pre-upgrade-hook1"
    spec:
      restartPolicy: Never
      serviceAccountName: "<an SA with delete RBAC permissions>"
      containers:
        - name: kubectl
          image: "lachlanevenson/k8s-kubectl:latest"
          # `command` replaces the image entrypoint, so invoke kubectl explicitly
          command: ["kubectl", "delete", "deployment", "deploy1", "deploy2"]
The lower hook-weight ensures this job runs before the migration job, giving the following series of events:
- You run helm upgrade
- The helm hook with the lowest hook-weight runs and deletes the relevant deployments
- The second hook runs your migrations
- Your Chart will install with new Deployments, Pods, etc.
Just make sure to keep all of the relevant Deployments in the same Chart.
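For reference, the whole sequence is then kicked off with an ordinary upgrade (the release and chart names here are placeholders):

helm upgrade my-release ./my-chart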