Access a Docker container from the host by its container name
There is an open-source application that solves this issue: it's called DNS Proxy Server. Here are some examples from the official repository.
It's a DNS server that resolves container hostnames; if it can't find a matching hostname, it resolves it from the internet as well.
Start DNS Server
$ docker run --hostname dns.mageddo --restart=unless-stopped -p 5380:5380 \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /etc/resolv.conf:/etc/resolv.conf \
defreitas/dns-proxy-server
It will automatically be set as your default DNS server (and the original configuration will be restored when it stops).
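To verify, you can check /etc/resolv.conf on the host; it should now point at the DNS Proxy Server's address (a rough sketch; 13.0.0.5 is just the server address that shows up in the lookups below):
$ cat /etc/resolv.conf
nameserver 13.0.0.5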
Create some containers for testing
Check the docker-compose file
$ cat docker-compose.yml
version: '3'
services:
  nginx-1:
    image: nginx
    hostname: nginx-1.docker
    network_mode: bridge
  linux-1:
    image: alpine
    hostname: linux-1.docker
    command: sh -c 'apk add --update bind-tools && tail -f /dev/null'
    network_mode: bridge # this way it can also resolve other containers' hostnames from inside (nginx-1.docker, for example)
Start the containers
$ docker-compose up
Resolving container hostnames
From the host
$ nslookup nginx-1.docker
Server: 13.0.0.5
Address: 13.0.0.5#53
Non-authoritative answer:
Name: nginx-1.docker
Address: 13.0.0.6
From another container
$ docker-compose exec linux-1 ping nginx-1.docker
PING nginx-1.docker (13.0.0.6): 56 data bytes
64 bytes from 13.0.0.6: seq=0 ttl=64 time=0.034 ms
It resolves internet hostnames as well
$ nslookup google.com
Server: 13.0.0.5
Address: 13.0.0.5#53
Non-authoritative answer:
Name: google.com
Address: 216.58.202.78
If you're only using your docker-compose setup locally, you could map the ports from your containers to your host:
elasticsearch:
  image: elasticsearch:2.2
  ports:
    - 9300:9300
    - 9200:9200
Then use localhost:9300 (or 9200, depending on the protocol) from your web app to access Elasticsearch.
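For example, assuming the standard Elasticsearch HTTP API is behind the published port 9200, a quick sanity check from the host could look like this:
$ curl http://localhost:9200
If the port mapping works, this returns the node and cluster info as JSON.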
A more complex solution is to run your own DNS server that resolves container names. I think this is a lot closer to what you're asking for. I have previously used SkyDNS when running Kubernetes locally.
There are a few options out there. Have a look at https://github.com/gliderlabs/registrator and https://github.com/jderusse/docker-dns-gen. I didn't try them, but you could potentially map the DNS port to your host in the same way as the Elasticsearch ports in the previous example, then add localhost to your resolv.conf to be able to resolve your container names from your host.
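As a rough sketch, assuming you publish the DNS container's port 53/udp on the host (an assumption, not something shown above), the host-side change could be as small as adding localhost as a nameserver:
# /etc/resolv.conf
nameserver 127.0.0.1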
There are two solutions (besides /etc/hosts) described here and here.
I wrote my own solution in Python and implemented it as a service to provide a mapping from container hostname to its IP. Here it is: https://github.com/nicolai-budico/dockerhosts
It launches dnsmasq with the parameter --hostsdir=/var/run/docker-hosts and updates the file /var/run/docker-hosts/hosts each time the list of running containers changes.
Once /var/run/docker-hosts/hosts changes, dnsmasq automatically updates its mapping and the container becomes available by hostname within a second.
$ docker run -d --hostname=myapp.local.com --rm -it ubuntu:17.10
9af0b6a89feee747151007214b4e24b8ec7c9b2858badff6d584110bed45b740
$ nslookup myapp.local.com
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
Name: myapp.local.com
Address: 172.17.0.2
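For reference, the hosts file that dnsmasq watches would contain entries roughly like this (an illustrative sketch based on the container above, not the service's exact output format):
# /var/run/docker-hosts/hosts (illustrative)
172.17.0.2  myapp.local.com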
There are install and uninstall scripts. All you need to do is allow your system to interact with this dnsmasq instance. I registered it in systemd-resolved:
$ cat /etc/systemd/resolved.conf
[Resolve]
DNS=127.0.0.54
#FallbackDNS=
#Domains=
#LLMNR=yes
#MulticastDNS=yes
#DNSSEC=no
#Cache=yes
#DNSStubListener=udp
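After editing the file, restart systemd-resolved so it starts forwarding to the dnsmasq instance (this assumes the dockerhosts dnsmasq listens on 127.0.0.54, as configured above):
$ sudo systemctl restart systemd-resolved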
I'm using a bash script to update /etc/hosts. Why this solution?
- Short script, easy to review (I didn't want to give some un-reviewed application with lots of dependencies access to the Docker socket, which means root access)
- It uses docker events to run every time a container is started or stopped (other solutions posted here run every second in a loop, which is way less efficient)
- Updates /etc/hosts, no separate DNS server needed
- Only dependencies are bash, mktemp, grep, xargs, sed, jq and docker, all of which I had already installed
Just put the script somewhere, e.g. /usr/local/bin/docker-update-hosts:
#!/usr/bin/env bash
set -e -u -o pipefail

hosts_file=/etc/hosts
begin_block="# BEGIN DOCKER CONTAINERS"
end_block="# END DOCKER CONTAINERS"

# Add the managed block markers to /etc/hosts if they are not there yet
if ! grep -Fxq "$begin_block" "$hosts_file"; then
  echo -e "\n${begin_block}\n${end_block}\n" >> "$hosts_file"
fi

# The initial echo triggers one update on startup; afterwards react to docker events
(echo "| container start |" && docker events) | \
  while read event; do
    if [[ "$event" == *" container start "* ]] || [[ "$event" == *" network disconnect "* ]]; then
      hosts_file_tmp="$(mktemp)"

      # Build "IP name" lines for all running containers and splice them
      # between the marker lines of the existing hosts file
      docker container ls -q | xargs -r docker container inspect | \
        jq -r '.[]|"\(.NetworkSettings.Networks[].IPAddress|select(length > 0) // "# no ip address:") \(.Name|sub("^/"; "")|sub("_1$"; ""))"' | \
        sed -ne "/^${begin_block}$/ {p; r /dev/stdin" -e ":a; n; /^${end_block}$/ {p; b}; ba}; p" "$hosts_file" \
        > "$hosts_file_tmp"

      chmod 644 "$hosts_file_tmp"
      mv "$hosts_file_tmp" "$hosts_file"
    fi
  done
Note: The script removes the _1 suffix that docker-compose adds to container names. If you don't want that, just remove |sub("_1$"; "") from the script.
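For illustration, the managed block in /etc/hosts ends up looking roughly like this (hypothetical names and IPs; the names come from docker container inspect, with the _1 suffix stripped):
# BEGIN DOCKER CONTAINERS
172.17.0.2 myproject_web
172.17.0.3 myproject_db
# END DOCKER CONTAINERS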
You can use a systemd service to run this synchronously with Docker: /etc/systemd/system/docker-update-hosts.service:
[Unit]
Description=Update Docker containers in /etc/hosts
Requires=docker.service
After=docker.service
PartOf=docker.service

[Service]
ExecStart=/usr/local/bin/docker-update-hosts

[Install]
WantedBy=docker.service
To activate, run:
systemctl daemon-reload
systemctl enable docker-update-hosts.service
systemctl start docker-update-hosts.service
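To check that it's working, start a container and look its name up through /etc/hosts (myproject_web is a hypothetical container name):
$ systemctl status docker-update-hosts.service
$ getent hosts myproject_web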