How to integrate Capistrano with Docker for deployment?

As far as I understood, you are using Capistrano on the host to redeploy the whole application stack, meaning the containers. So you are using Capistrano to orchestrate building, container creation and thus deployment.

When running cap deploy, you basically:

  • build the app (based on the current code you pulled onto the host) - probably even including gulp/grunt/build tasks
  • then "package" it into your image using "volume mounts"
  • during that, start / replace the containers

You do so to get a 'nearly' zero-downtime deployment.

If you really care about the downtime and about formalising your deployment process that much, you should do it right by using a proper pipeline implementation for

  • packaging / CI
  • deployment / distribution

I do not think Capistrano can or should be one of the tools used in this strategy. Capistrano is meant for deploying an application directly onto a server, using SSH and Git as transport. Using cap to build whole images on the target server and then start those as containers is really over the top, IMHO.

packaging / building

Use a CI/CD server like Jenkins/Bamboo/GoCD to build a release image for your application. Assuming only the app is customised in terms of a 'release' - let's say you have db and app as containers/services - app will include your source code and will change regularly with releases.

So it is a CI/CD job to build a new app image (a release) offsite on your CI server: pull the source code of your application and package it into the image using COPY, then use any RUN statements to compile your assets (npm / gulp / grunt, whatever). All of that happens not on the production server, but on the CI/CD agent. Using multi-stage builds to keep the image slim is encouraged.
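
As a rough sketch, a multi-stage Dockerfile for such a release image could look like the following; the Node/gulp toolchain and the nginx runtime stage are just assumptions here, adapt them to your actual stack:

```dockerfile
# build stage: install dependencies and compile assets on the CI agent
FROM node:18 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npx gulp build            # or npm run build / grunt, whatever your project uses

# runtime stage: only the compiled output ends up in the slim release image
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
```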

Then you push this release image, let's call it yourregistry.com/yourapp, into your private registry as a new 'version' for deployment.
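
On the CI agent that boils down to something like this (the version tag 1.2.3 is only an example, use whatever release numbering you have):

```sh
# build, tag and push the release image from the CI agent
docker build -t yourregistry.com/yourapp:1.2.3 .
docker push yourregistry.com/yourapp:1.2.3

# optionally move a 'latest' tag along, if your compose files track it
docker tag yourregistry.com/yourapp:1.2.3 yourregistry.com/yourapp:latest
docker push yourregistry.com/yourapp:latest
```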

deployment

with downtime (easy)

To deploy onto your production or staging server WITH downtime, you would simply do a docker-compose pull && docker-compose up - this pulls the newer image and then starts it in your stack, and your app is upgraded. Using tagged images per release would require changing the docker-compose.yml for each deployment.

The server should of course be able to pull from your private repository.
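
For illustration only (service names, ports and the db image are made up, your stack will differ), the compose file on the host references the registry image:

```yaml
# docker-compose.yml on the production/staging host (sketch)
version: "3"
services:
  app:
    image: yourregistry.com/yourapp:latest   # or a pinned release tag per deployment
    ports:
      - "443:3000"
  db:
    image: postgres:15
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
```

The actual upgrade is then just:

```sh
docker login yourregistry.com   # once, so the host can pull from the private registry
docker-compose pull app
docker-compose up -d app        # recreates only the app container from the newer image
```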

without downtime (more effort)

To achieve a zero-downtime deployment you should use the blue-green deployment concept. You add a proxy to your setup and no longer expose the app's port publicly, but rather expose the proxy's public port. Your current live system might be running on a random port, say 21231, and the proxy forwards from 443 to 21231.

We use random ports to avoid port conflicts while deploying the "second" system, which covers one of the issues you mentioned.
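
With nginx as the proxy, a plain static version of that setup could look like this (the server name and certificate paths are placeholders):

```nginx
# /etc/nginx/conf.d/yourapp.conf on the proxy
server {
    listen 443 ssl;
    server_name yourapp.example.com;
    ssl_certificate     /etc/nginx/certs/yourapp.pem;   # placeholder paths
    ssl_certificate_key /etc/nginx/certs/yourapp.key;

    location / {
        proxy_pass http://127.0.0.1:21231;   # the currently live container's random port
    }
}
```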

When redeploying, you only start a "new" container based on the new app image in addition to the old one; it gets a new random port, say 12312. If you like, run your integration tests against 12312 directly (do not go through the proxy). When you are done and happy, reconfigure the proxy to forward to 12312 and then remove the old container (21231).
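
A rough manual version of that sequence could look like this; container names, ports, tags and the health-check URL are all hypothetical, and it assumes nginx runs directly on the host:

```sh
# start the "green" container from the new release image on a fresh host port
docker run -d --name yourapp_green -p 12312:3000 yourregistry.com/yourapp:1.2.4

# test the new container directly, bypassing the proxy
curl -f http://localhost:12312/health

# edit the proxy_pass target in the nginx config from 21231 to 12312, then reload
sudo nginx -s reload

# finally remove the old "blue" container
docker rm -f yourapp_blue
```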

If you want to automate the proxy reconfiguration, which in detail is out of scope for this question, you can use service discovery and a registrator; that makes random ports much more practical and makes it easy to reconfigure your proxy, be it nginx/haproxy, while it is running (see the sketch after the list below). Tools would be, for example:

  • consul
  • consul watch + consul-template or tiller on the proxy to update the proxy-config
  • Registrator for centralized registration, or consul agent in client mode with a service-configuration.json (depends on your choice)
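
As a small sketch of how that fits together with consul-template (the service name yourapp and all paths are assumptions), the nginx upstream is rendered from the services Consul knows about:

```
# app.ctmpl on the proxy host - consul-template renders this into the nginx config
upstream yourapp {
{{ range service "yourapp" }}
  server {{ .Address }}:{{ .Port }};
{{ end }}
}
```

consul-template then watches Consul and reloads the proxy whenever the rendered output changes, roughly like this:

```sh
consul-template -template "app.ctmpl:/etc/nginx/conf.d/yourapp-upstream.conf:nginx -s reload"
```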

I don't think Capistrano is the right tool for the job. This was recently discussed in a PR for SSHKit, which underlies Capistrano.

https://github.com/capistrano/sshkit/pull/368

@EugenMayer does a better job of explaining a "normal" way of using Docker.