Rebuild Docker container on file changes

You can run a build for a specific service by running docker-compose up --build <service name>, where the service name must match the name you gave it in your docker-compose file.

Example: Let's assume your docker-compose file contains several services (.NET app, database, Let's Encrypt, etc.) and you want to update only the .NET app, which is named application in the docker-compose file. You can then simply run docker-compose up --build application.

Extra parameters: If you want to add extra parameters to your command, such as -d for running in the background, the parameter must come before the service name: docker-compose up --build -d application


Video with visual explanation (from 2022)

Visual video explanation: containers vs. images

Since I got a lot of positive feedback on my first visual explanation, I decided to create another video for this question and answer, since some things can be visualized better in a graphical video. It visualizes and also updates this answer with the knowledge and experience I have gained over the last years of using Docker on multiple systems (and also K8s).

While this question was asked in the context of ASP.NET Core, it is not really related to that framework. The problem was a lack of basic understanding of Docker concepts, so it can happen with nearly every application and framework. For that reason, I used a simple Nginx webserver here, since I think many of you are familiar with web servers, but not everyone knows how a specific framework like ASP.NET Core works.

The underlying problem is understanding the difference between containers and images and how they differ in their lifecycle, which is the basic topic of this video.

Textual answer (Originally from 2016)

After some research and testing, I found that I had misunderstood the lifetime of Docker containers. Simply restarting a container doesn't make Docker use a new image, even when the image was rebuilt in the meantime. Instead, Docker fetches the image only before creating the container, so the state after running a container is persistent.

Why removing is required

Therefore, rebuilding and restarting isn't enough. I thought containers work like a service: stop the service, make your changes, restart it, and they would apply. That was my biggest mistake.

Because containers are persistent, you have to remove them first using docker rm <ContainerName>. After a container is removed, you can't simply start it again with docker start. It has to be created with docker run, which itself uses the latest image to create a new container instance.
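The difference can be sketched with a few commands (container and image names here are placeholders):

```shell
containerName=my-container   # hypothetical container name
imageName=xx:my-image        # hypothetical image tag

# Stopping keeps the container, still bound to the old image:
docker stop "$containerName"

# Removing destroys it; "docker start" can no longer bring it back:
docker rm "$containerName"

# Only "docker run" creates a fresh container from the latest image:
docker run -d --name "$containerName" "$imageName"
```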

Containers should be as independent as possible

With this knowledge, it's understandable why storing data in containers is considered bad practice and why Docker recommends data volumes or mounting host directories instead: since a container has to be destroyed to update the application, the data stored inside would be lost too. This causes extra work to shut down services, back up data, and so on.

So it's a smart solution to exclude that data completely from the container: we don't have to worry about our data when it's stored safely on the host, and the container only holds the application itself.
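As a sketch, mounting a hypothetical host directory could look like this (the path, container name, and image name are assumptions):

```shell
dataDir=/srv/my-app-data   # hypothetical host directory for persistent data

# The host directory survives "docker rm my-app";
# only the application itself lives inside the container.
docker run -d \
  --name my-app \
  -v "$dataDir":/app/data \
  xx:my-image
```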

Why --rm may not really help you

The docker run command has a clean-up switch called --rm. It stops the behavior of keeping containers permanently: with --rm, Docker destroys the container after it has exited. But this switch has a problem: Docker also removes the anonymous volumes associated with the container, which may kill your data.
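For example, a throwaway run might look like this (the image name is a placeholder):

```shell
imageName=xx:my-image   # hypothetical image tag

# --rm: the container, and any anonymous volumes it created,
# are removed automatically as soon as the container exits.
docker run --rm "$imageName"
```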

While the --rm switch is a good option to save work during development for quick tests, it's less suitable in production, especially because of the missing option to run such a container in the background, which would mostly be required.

How to remove a container

We can bypass those limitations by simply removing the container:

docker rm --force <ContainerName>

The --force (or -f) switch uses SIGKILL on running containers. Alternatively, you could stop the container first:

docker stop <ContainerName>
docker rm <ContainerName>

Both are nearly equivalent; docker stop uses SIGTERM instead. But using the --force switch will shorten your script, especially when using CI servers: docker stop throws an error if the container is not running, which would cause Jenkins and many other CI servers to wrongly consider the build failed. To fix this, you would first have to check whether the container is running, as I did in the question (see the containerRunning variable).
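A CI-friendly one-liner could therefore look like this (the container name is a placeholder); the `|| true` additionally swallows the error when the container doesn't exist at all, e.g. on the very first build:

```shell
containerName=my-container   # hypothetical container name

# Stops (with SIGKILL) and removes the container in one step;
# "|| true" keeps the script green when there is nothing to remove yet.
docker rm --force "$containerName" 2>/dev/null || true
```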

There is a better way (Added 2016)

While plain docker commands like docker build, docker run and others are a good way for beginners to understand the basic concepts, it gets annoying when you're already familiar with Docker and want to get productive. A better way is to use Docker Compose. While it's designed for multi-container environments, it also gives you benefits when used standalone with a single container. Although multi-container environments aren't really uncommon: nearly every application has at least an application server and some database, and some have even more, like caching servers, cron containers, or other things.

version: "2.4"
services:
  my-container:
    build: .
    ports:
      - "5000:5000"

Now you can just use docker-compose up --build and Compose will take care of all the steps I did manually. I'd prefer this over the script with plain docker commands, which I added as an answer in 2016. It still works, but it is more complex and handles certain situations not as well as docker-compose would. For example, Compose checks whether everything is up to date and only rebuilds the things that need to be rebuilt because of changes.

Especially when you're using multiple containers, compose offers way more benefits. For example, linking the containers which requires to create/maintain networks manually otherwise. You can also specify dependencies, so that a database container is started before the application server, which depends on the DB at startup.
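As a sketch, such a dependency could extend the compose file above with a hypothetical database service (the service name and postgres image are assumptions):

```yaml
version: "2.4"
services:
  my-container:
    build: .
    ports:
      - "5000:5000"
    depends_on:
      - db          # start the database container first
  db:
    image: postgres:15
```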

In the past, with Docker Compose 1.x, I noticed some issues, especially with caching, which resulted in containers not being updated even when something had changed. I have now tested Compose v2 for some time without seeing any of those issues again, so it seems to be fixed.

Full script for rebuilding a Docker container (original answer from 2016)

With this new knowledge, I fixed my script in the following way:

#!/bin/bash
imageName=xx:my-image
containerName=my-container

docker build -t "$imageName" -f Dockerfile .

echo "Delete old container..."
docker rm -f "$containerName"

echo "Run new container..."
docker run -d -p 5000:5000 --name "$containerName" "$imageName"

This works perfectly :)


Whenever changes are made to the Dockerfile, the compose file, or the requirements, re-run docker-compose up --build so that the images get rebuilt and refreshed.