Building a compiled application with Docker
The solutions presented in the other answers, and in particular Misha Brukman's suggestion in the comments to this answer of using one Dockerfile for development and one for production, would have been considered idiomatic at the time the question was written. However, the problems they are trying to solve, in particular cleaning up the build environment to reduce image size while still being able to use the same container environment in development and production, have effectively been solved by multi-stage builds, which were introduced in Docker 17.05.
The idea is to split the Dockerfile into two parts: one based on your favorite development environment, such as a fully-fledged Debian base image, concerned with creating the binaries you ultimately want to deploy, and another which simply runs those binaries in a minimal environment, such as Alpine.
This way you avoid possible discrepancies between development and production environments as alluded to by blueskin in one of the comments, while still ensuring that your production image is not polluted with development tooling.
The documentation provides the following example of a multi-stage build of a Go application, which you would then adapt to a C++ development environment (one gotcha being that Alpine uses musl, so you have to be careful when linking in your development environment).
# First stage: full Go toolchain for building the binary
FROM golang:1.7.3
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .

# Second stage: minimal runtime image; --from=0 copies from the first stage
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=0 /go/src/github.com/alexellis/href-counter/app .
CMD ["./app"]
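Adapted to C++, the same pattern might look something like the sketch below. The gcc image tag, file name, and single-translation-unit build are illustrative assumptions, not a definitive recipe; linking with -static sidesteps the musl gotcha by producing a binary that depends on neither glibc nor musl at runtime.

# First stage: full GNU toolchain (glibc-based)
FROM gcc:12 AS build
WORKDIR /src
COPY main.cpp .
# Static linking avoids a runtime dependency on glibc,
# which the musl-based Alpine image does not provide
RUN g++ -O2 -static -o app main.cpp

# Second stage: minimal runtime image
FROM alpine:latest
WORKDIR /root/
COPY --from=build /src/app .
CMD ["./app"]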
Update
For anyone visiting this question after 2017, please see the answer by fuglede about using multi-stage Docker builds. That is really a better solution than my answer (below) from 2015, which was written well before multi-stage builds were available.
Old answer
The way I would do it is to run your build outside of your container and only copy the output of the build (your binary and any necessary libraries) into your container. You can then push the resulting image to a container registry (e.g., use a hosted one or run your own) and pull it from that registry onto your production machines. Thus, the flow could look like this:
- build binary
- test / sanity-check the binary itself
- build container image with binary
- test / sanity-check the container image with the binary
- upload to container registry
- deploy to staging/test/qa, pulling from the registry
- deploy to prod, pulling from the registry
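As a concrete sketch of that flow, with an illustrative registry, image name, and version (and a hypothetical self-test flag; substitute your own smoke tests):

# Build and sanity-check the binary on the build machine
make app
./app --self-test

# Bake the binary into an image and sanity-check the image
docker build -t registry.example.com/myapp:1.0.0 .
docker run --rm registry.example.com/myapp:1.0.0 --self-test

# Upload to the registry
docker push registry.example.com/myapp:1.0.0

# On the staging/QA/prod machines: pull and run the exact same image
docker pull registry.example.com/myapp:1.0.0
docker run -d registry.example.com/myapp:1.0.0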
Since it's important that you test before production deployment, you want to test exactly the same thing that you will deploy in production, so you don't want to extract or modify the Docker image in any way after building it.
I would not run the build inside the container you plan to deploy in prod, as then your container will contain all sorts of additional artifacts (such as temporary build outputs, tooling, etc.) that you don't need in production and that needlessly grow your container image.
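The Dockerfile for such an image can be very small, since it only has to pull in the runtime dependencies and the prebuilt binary. A minimal sketch, assuming a Debian-based runtime with illustrative package and binary names:

FROM debian:stable-slim
# Runtime dependencies only; no compilers or build tooling
RUN apt-get update && apt-get install -y --no-install-recommends libsqlite3-0 \
 && rm -rf /var/lib/apt/lists/*
COPY build/app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]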
My recommendation would be to develop, build, and test completely inside the container itself. This follows the Docker philosophy that the developer's environment should be the same as the production environment; see The Modern Developer Workstation on MacOS with Docker. This matters especially for C++ applications, which usually have dependencies on shared libraries/object files.
I don't think a standardized process for developing, testing, and deploying C++ applications in Docker exists yet. To answer your question, the way we do it as of now is to treat the container as your development environment and enforce a set of practices on the team, such as:
- Our codebase (except config files) always lives on a shared volume (on the local machine), versioned in Git
- Shared/dependent libraries, binaries, etc. always live in the container
- Build & test in the container, and before committing the image, clean unwanted object files, libraries, etc., and ensure that the docker diff changes are as expected (see the sketch after this list)
- Changes/updates to the environment, including shared libraries and dependencies, are always documented and communicated with the team
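As a sketch of that docker diff check, assuming a long-running development container named devbox with the code mounted at /code (the container name, paths, make targets, and image tag are all illustrative):

# Build inside the dev container, then clean intermediate artifacts
docker exec devbox make -C /code
docker exec devbox make -C /code clean
# Inspect the filesystem delta (A=added, C=changed, D=deleted)
# and verify that only expected paths show up
docker diff devbox
# If the delta looks right, snapshot the container as a new image
docker commit devbox mydevimage:latest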
I had difficulties automating our build with docker-compose, and I ended up using docker build for everything:
Three layers for building:
Run → develop → build
Then I copy the build outputs into the 'deploy' image:
Run → deploy
Four layers to play with:

- Run
  - Contains any packages required for the application to run - e.g. libsqlite3-0
- Develop
  - FROM <projname>:run
  - Contains packages required for the build
    - e.g. g++, cmake, libsqlite3-dev
  - Dockerfile executes any external builds
    - e.g. steps to build boost-python3 (not in package manager repositories)
- Build
  - FROM <projname>:develop
  - Contains source
  - Dockerfile executes internal build (code that changes often)
  - Built binaries are copied out of this image for use in deploy
- Deploy
  - FROM <projname>:run
  - Output of build copied into image and installed
  - RUN or ENTRYPOINT used to launch the application
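As a sketch of what those four Dockerfiles might contain (package names, paths, and build commands are illustrative; <projname> is the placeholder used throughout this answer):

# run/Dockerfile - runtime dependencies only
FROM debian:stable-slim
RUN apt-get update && apt-get install -y --no-install-recommends libsqlite3-0 \
 && rm -rf /var/lib/apt/lists/*

# develop/Dockerfile - build toolchain layered on top of run
FROM <projname>:run
RUN apt-get update && apt-get install -y --no-install-recommends g++ cmake libsqlite3-dev \
 && rm -rf /var/lib/apt/lists/*

# build/Dockerfile - compiles the frequently changing source
FROM <projname>:develop
COPY . /src
RUN cmake -S /src -B /build && cmake --build /build

# deploy/Dockerfile - installs the built binary into the runtime image
FROM <projname>:run
# app is copied out of the build image into this build context beforehand
COPY app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]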
The folder structure looks like this:
.
├── run
│ └── Dockerfile
├── develop
│ └── Dockerfile
├── build
│ ├── Dockerfile
│ └── removeOldImages.sh
└── deploy
├── Dockerfile
└── pushImage.sh
Setting up the build server means executing:
docker build -f run/Dockerfile -t <projname>:run .
docker build -f develop/Dockerfile -t <projname>:develop .
Each time we make a build, this happens:
# Execute the build
docker build -f build/Dockerfile -t <projname>:build .

# Install build outputs
docker build -f deploy/Dockerfile -t <projname>:<version> deploy

# If successful, push the deploy image to Docker Hub
docker tag <projname>:<version> <projname>:latest
docker push <projname>:<version>
docker push <projname>:latest
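One step is implicit between the build and deploy images: the built binaries have to be copied out of <projname>:build into the deploy build context. A sketch of how that might be done with docker create/docker cp (the binary path is illustrative):

# Extract the built binary from the build image into deploy's context
docker create --name extract <projname>:build
docker cp extract:/build/app deploy/app
docker rm extract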
I refer people to the Dockerfiles as documentation about how to build/run/install the project.
If a build fails and the output is insufficient for investigation, I can run /bin/bash in <projname>:build and poke around to see what went wrong.
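For example, to get an interactive shell in the most recent build image:

docker run --rm -it <projname>:build /bin/bash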
I put together a GitHub repository around this idea. It works well for C++, but you could probably use it for anything.
I haven't explored the feature, but @TaylorEdmiston pointed out that my pattern here is quite similar to multi-stage builds, which I didn't know about when I came up with this. It looks like a more elegant (and better documented) way to achieve the same thing.