Multiple Docker containers, same image, different config

Each container runs from the same read-only (RO) image but gets its own read-write (RW) container-specific filesystem layer on top. The result is that each container can have its own files, distinct from every other container's.

You can pass in configuration on the command line, as an environment variable, or as a per-container volume mount. This is a very standard Docker use case.
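The three mechanisms above, sketched as docker run invocations (the image name "myapp" and the file paths are placeholders, not from the original answer; running these requires a Docker daemon):

```shell
# 1. Configuration as a command-line argument to the containerized program
docker run myapp --config-option=value

# 2. Configuration as an environment variable, read by the app at startup
docker run -e APP_MODE=production myapp

# 3. Configuration as a bind-mounted file (host path must be absolute)
docker run -v "$PWD/app.conf:/etc/myapp/app.conf:ro" myapp
```

The same image runs in all three cases; only the per-container configuration differs.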


Just run the same image as many times as needed. A new container is created on each run, and each container can then be started and stopped independently, keeping its own configuration. For your convenience, it is better to give each of your containers a name with "--name".

For example:

docker run --name MyContainer1 <same image id>
docker run --name MyContainer2 <same image id>
docker run --name MyContainer3 <same image id>

That's it.

$ docker ps
CONTAINER ID        IMAGE            CREATED          STATUS               NAMES
a7e789711e62        67759a80360c   12 hours ago     Up 2 minutes         MyContainer1
87ae9c5c3f84        67759a80360c   12 hours ago     Up About a minute    MyContainer2
c1524520d864        67759a80360c   12 hours ago     Up About a minute    MyContainer3

After that, your containers persist (until you remove them) and you can start and stop them like VMs.

docker start MyContainer1
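Stopping and starting reuses the same container, so the RW layer and the files in it survive (container names as above; requires a Docker daemon):

```shell
docker stop MyContainer1     # container keeps its filesystem layer
docker start MyContainer1    # same container, same files, same config
docker rm MyContainer1       # only this actually discards the RW layer
```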

I think looking at easy-to-understand examples gives the best picture.

What you want to do is perfectly valid: an image should contain everything you need to run, but not the configuration.

To provide the configuration, you have these options:

a) volume mounts

Use volumes and mount the file during container start: docker run -v $PWD/my.ini:/etc/mysql/my.ini percona (and similar with docker-compose). Note that a bind mount needs an absolute host path, which is why $PWD is used here. You can repeat this as often as you like, so you can mount several configs into your container (that is, into the runtime version of the image). You create those configs on the host before running the container and need to ship those files along with the container, which is the downside of this approach (portability).
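A sketch of the "create the config on the host first" step (file path and contents are illustrative, not from the original answer):

```shell
# Create the config file on the host; once it exists, it can be
# bind-mounted into any number of containers.
cat > /tmp/my.ini <<'EOF'
[mysqld]
max_connections = 100
EOF

# Then mount it at container start (requires a Docker daemon):
# docker run -v /tmp/my.ini:/etc/mysql/my.ini percona
```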

b) entry-point based configuration (generation)

Most of the advanced Docker images provide a so-called entrypoint script, which consumes ENV variables you pass when starting the container to create the configuration(s) for you, like https://github.com/docker-library/percona/blob/master/5.7/docker-entrypoint.sh

So when you run this image, you can do docker run -e MYSQL_DATABASE=myapp percona, and this will start Percona and create the database myapp for you. This is all done by:

  1. adding the entrypoint script: https://github.com/docker-library/percona/blob/master/5.7/Dockerfile#L65
  2. copying the script during the image build (do not forget this): https://github.com/docker-library/percona/blob/master/5.7/Dockerfile#L63
  3. letting your ENV variable trigger this during container startup: https://github.com/docker-library/percona/blob/master/5.7/docker-entrypoint.sh#L91
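A minimal sketch of the pattern (hypothetical script; the real percona entrypoint linked above does much more, including launching mysqld via exec "$@"):

```shell
#!/bin/sh
# Hypothetical entrypoint sketch: turn an ENV variable into configuration.
set -e

# Normally passed via `docker run -e MYSQL_DATABASE=...`; default for the demo.
MYSQL_DATABASE="${MYSQL_DATABASE:-myapp}"
INIT_SQL=/tmp/init.sql   # illustrative path

# Generate configuration from the environment, the way the percona
# entrypoint generates CREATE DATABASE statements:
printf 'CREATE DATABASE IF NOT EXISTS `%s`;\n' "$MYSQL_DATABASE" > "$INIT_SQL"

cat "$INIT_SQL"
```

The key idea: the image ships the generator, and each container's ENV decides what gets generated at startup.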

Of course, you can do whatever you like with this. E.g. this configures a general Portus image: https://github.com/EugenMayer/docker-rancher-extra-catalogs/blob/master/templates/registry-slim/11/docker-compose.yml which uses this entrypoint: https://github.com/EugenMayer/docker-image-portus/blob/master/build/startup.sh

So you see, the entrypoint strategy is very common and very powerful, and I would suggest going this route whenever you can.

c) Derived images

Maybe for "completeness": the derived-image strategy. You have your base image called "myapp", and for installation X you create a new image:

FROM myapp
COPY my.ini /etc/mysql/my.ini
COPY application.yml /var/app/config/application.yml

And call this image myapp:x. The obvious issue with this is that you end up with a lot of images; on the other hand, compared to a), it is much more portable.
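Building and running such a derived image (tag and container name are illustrative; requires a Docker daemon):

```shell
# Build the per-installation image from the Dockerfile above
docker build -t myapp:x .

# Run it; the baked-in configuration travels with the image itself
docker run --name myapp-x myapp:x
```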

Hope that helps.