Docker as a sandbox for untrusted code
tl;dr: container solutions do not and never will guarantee complete isolation; use virtualization instead if you require it.
Bottom-up and top-down approaches
Docker (and the same applies to similar container solutions) does not guarantee complete isolation and should not be confused with virtualization. Isolation of containers is achieved by adding barriers between them, but they still use shared resources such as the kernel. Virtualization, on the other hand, shares a much smaller set of resources, which is easier to understand, well-tested by now, and often enriched by hardware features to restrict access. Docker itself describes this in its Docker security article:
One primary risk with running Docker containers is that the default set of capabilities and mounts given to a container may provide incomplete isolation, either independently, or when used in combination with kernel vulnerabilities.
Consider virtualization as a top-down approach
For virtualization, you start with pretty much complete isolation and provide some well-guarded, well-described interfaces; this means you can be rather sure that breaking out of a virtual machine is hard. The kernel is not shared: even if you have a kernel exploit that allows you to escape user restrictions, the hypervisor is still in between you and other virtual machines.
This does not imply perfect isolation. Again and again, hypervisor issues are found, but most of them are very complicated attacks with limited scope that are hard to perform (though there are also very critical, "easy to exploit" ones).
Containers on the other hand are bottom-up
With containers, you start from running applications on the same kernel, but add barriers (kernel namespaces, cgroups, ...) to better isolate them. While this provides advantages such as lower overhead, it is much more difficult to "be sure" of not having forgotten anything; the Linux kernel is a very large and complex piece of software. And the kernel itself is still shared: if there is an exploit in the kernel, chances are high you can escape to the host (and/or other containers).
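To get a feel for these kernel barriers on their own, you can experiment with the unshare tool from util-linux, which creates such namespaces directly (a sketch, not Docker-specific):

    # Start a shell in fresh PID and mount namespaces; --fork makes the
    # shell PID 1 in the new namespace, --mount-proc remounts /proc so
    # process listings reflect the namespace instead of the host
    sudo unshare --pid --fork --mount-proc bash

    # Inside, only processes of this namespace are visible:
    ps aux

The shell feels isolated, yet it still runs on the very same kernel as everything else on the host.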
Users inside and outside containers
Especially before Docker 1.9, which should get user namespaces, this pretty much means that "container root also has host root privileges" as soon as another missing barrier in the Docker machine (or a kernel exploit) is found. There have been such issues before; you should expect more to come, and Docker recommends that you
take care of running your processes inside the containers as non-privileged users (i.e., non-root).
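For example, a minimal sketch (the uid:gid 1000:1000 is just a placeholder; alternatively, set an unprivileged user with the USER instruction when building the image):

    # Run the container process as an unprivileged uid:gid instead of root,
    # so a process escaping the container does not land as host root
    docker run --rm -it --user 1000:1000 debian id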
If you're interested in more details, estep posted a good article on http://integratedcode.us explaining user namespaces.
Restricting root access (for example, by enforcing a non-privileged user when creating the image, or at least by using the new user namespaces) is a necessary and basic security measure for providing isolation, and might give satisfying isolation between containers. Using restricted users and user namespaces, escaping to the host gets much harder, but you still cannot be sure there is not yet another, not-yet-considered way to break out of a container (possibly by exploiting an unpatched security issue in the kernel), so containers should not be used to run untrusted code.
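Assuming a Docker release that already ships user namespace support, remapping can be enabled on the daemon; a minimal sketch:

    # Start the daemon with user namespace remapping; "default" makes
    # Docker create a "dockremap" user whose subordinate uid/gid ranges
    # (/etc/subuid, /etc/subgid) back the mapping, so uid 0 inside a
    # container is an unprivileged uid on the host
    dockerd --userns-remap=default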
Whilst the answer from @jens-erat makes the correct high-level point that virtualization provides superior isolation to containerization solutions like Docker, it is not a black-and-white situation.
On the one hand, there have been a number of guest-to-host breakouts in virtualization technology (for example, the "Venom" vulnerability in the virtual floppy device driver), so like any security control, the isolation provided by virtualization is not 100%.
From the perspective of hardening your Docker installation to improve isolation and reduce the risk posed, there are a number of steps you can take.
Docker has some good security guidance available on hardening. There is a (slightly out-of-date) CIS Security Guide, and also docker bench, which can be used to review configurations.
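A typical way to run docker bench against a host (see the project's README for the up-to-date invocation; this is a sketch):

    # Clone and run Docker Bench for Security; it checks the host,
    # daemon configuration and running containers against the CIS benchmark
    git clone https://github.com/docker/docker-bench-security.git
    cd docker-bench-security
    sudo sh docker-bench-security.sh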
Depending on how your application operates (i.e., how the code gets on there for compilation), you can modify the operation of Docker to reduce the chances of malicious activity. For example, assuming that the code gets on there at the host level, you may be able to deny network access to the container (the --net none switch on docker run). You can also look at whether you can drop additional capabilities to reduce what the process running in the container can do. Consider using AppArmor profiles to restrict resources: AppArmor can be used to restrict what can be done in the container, and you can use tools like bane to generate profiles for your applications. A combined invocation is sketched below.
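Put together, a hardened invocation might look like the following sketch (the image and profile names are placeholders):

    # No network, all capabilities dropped, a custom AppArmor profile,
    # and an unprivileged user inside the container
    docker run --rm -it \
        --net none \
        --cap-drop ALL \
        --security-opt apparmor=my-restricted-profile \
        --user 1000:1000 \
        my-build-image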
I would also recommend implementing some monitoring at the host level to look for possibly malicious access. As you know what the containers should and should not be doing, some relatively strict monitoring would alert you to any possible break-out.
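As a starting point, the CIS guidance suggests putting audit watches on the Docker binaries and directories; a minimal sketch using auditd:

    # Watch the Docker daemon, its data directory and its configuration;
    # any access is tagged "docker" in the audit log
    sudo auditctl -w /usr/bin/docker -k docker
    sudo auditctl -w /var/lib/docker -k docker
    sudo auditctl -w /etc/docker -k docker

    # Review what was recorded
    sudo ausearch -k docker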
Another area that could be productive in hardening this kind of setup is to use stripped-down host OSes and container images. The less code exposed, the smaller the attack surface. Something like CoreOS or Ubuntu Snappy Core might be worth looking at.
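As a quick illustration on the image side, compare the size of a minimal base image with a full distribution image (exact sizes vary by tag):

    # A minimal userland vs. a full distribution: fewer binaries and
    # libraries in the image means less code for an attacker to leverage
    docker pull alpine
    docker pull ubuntu
    docker images alpine
    docker images ubuntu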