Add host mappings to /etc/hosts in Kubernetes

Create a file on the host system (or a Secret) with all the extra hosts you need (e.g. /tmp/extra-hosts).

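For example, /tmp/extra-hosts could contain ordinary /etc/hosts-style lines (the IPs and names below are just placeholders):

10.1.2.3  foo.remote
10.1.2.4  bar.remote
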
Then, in the Kubernetes manifest:

spec:
  containers:
    - name: haproxy
      image: haproxy
      lifecycle:
        postStart:
          exec:
            # append the mounted hosts file once the container has started
            command: ["/bin/sh", "-c", "cat /hosts >> /etc/hosts"]
      volumeMounts:
        - name: haproxy-hosts
          mountPath: /hosts
  volumes:                  # volumes sit at the Pod spec level, not under the container
    - name: haproxy-hosts
      hostPath:
        path: /tmp/extra-hosts

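If you keep the extra hosts in a Secret instead of a file on the node (the "or a Secret" option above), the same pattern works with a secret volume. A minimal sketch, assuming a Secret named extra-hosts with a key called hosts (both names are made up here):

spec:
  containers:
    - name: haproxy
      image: haproxy
      lifecycle:
        postStart:
          exec:
            command: ["/bin/sh", "-c", "cat /hosts >> /etc/hosts"]
      volumeMounts:
        - name: haproxy-hosts
          mountPath: /hosts
          subPath: hosts        # mount only the "hosts" key of the Secret as the file /hosts
  volumes:
    - name: haproxy-hosts
      secret:
        secretName: extra-hosts
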
From kubernetes.io/docs: "In addition to the default boilerplate, we can add additional entries to the hosts file. For example, to resolve foo.local, bar.local to 127.0.0.1 and foo.remote, bar.remote to 10.1.2.3, we can add HostAliases to the Pod under .spec.hostAliases."
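
On reasonably recent Kubernetes versions that looks roughly like the following (pod name, container and image are just illustrative; the IPs and hostnames are the ones from the docs quote above):

apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  hostAliases:
    - ip: "127.0.0.1"
      hostnames:
        - "foo.local"
        - "bar.local"
    - ip: "10.1.2.3"
      hostnames:
        - "foo.remote"
        - "bar.remote"
  containers:
    - name: cat-hosts
      image: busybox
      command: ["cat", "/etc/hosts"]   # just prints the resulting hosts file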

You can also "Configure stub-domain and upstream DNS servers".
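
With kube-dns that is done through a ConfigMap in kube-system (CoreDNS uses a Corefile instead). A minimal sketch, assuming you want *.example.com resolved by an internal DNS server at 10.0.0.10 and everything else forwarded to public resolvers; all domains and IPs here are placeholders:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"example.com": ["10.0.0.10"]}
  upstreamNameservers: |
    ["8.8.8.8", "8.8.4.4"]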


To add a hostname to the hosts file in a "semi-dynamic" fashion, one can use a postStart hook:

spec:
  containers:
  - name: somecontainer
    image: someimage
    lifecycle:
      postStart:
        exec:
          # exec runs no shell, so wrap the redirection in "sh -c";
          # use ">>" to append instead of overwriting the default entries
          command: ["/bin/sh", "-c", "echo 'someip somedomain' >> /etc/hosts"]

A better way, however, would be to use an abstract name for the service across stages. For example, instead of database01.production.company.com use database01, and set up each environment so that the name resolves to the production database in production and to the staging database in staging.
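
One way to realize this in Kubernetes (a sketch, not something from the original setup) is an ExternalName Service per environment, so Pods always resolve the short name database01:

apiVersion: v1
kind: Service
metadata:
  name: database01
spec:
  type: ExternalName
  # in the staging cluster this would point at the staging database instead
  externalName: database01.production.company.com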

Lastly, it is also possible to edit the kube-dns settings so that the Kubernetes DNS can be used to resolve external DNS names. Then you just use whatever name you need in the code, and it "automagically" works. See for example https://github.com/kubernetes/kubernetes/issues/23474 for how to set this up (it varies a bit between versions of skydns; some older ones really do not work with this, so upgrade to at least Kubernetes 1.3 to make this work properly).