Is there a way to add arbitrary records to kube-dns?

For the record, here is an alternative solution for those who did not check the referenced GitHub issue.

You can define an "external" Service in Kubernetes by not specifying any selector or ClusterIP. You also have to define a corresponding Endpoints object pointing to your external IP.

From the Kubernetes documentation:

{
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": {
        "name": "my-service"
    },
    "spec": {
        "ports": [
            {
                "protocol": "TCP",
                "port": 80,
                "targetPort": 9376
            }
        ]
    }
}
{
    "kind": "Endpoints",
    "apiVersion": "v1",
    "metadata": {
        "name": "my-service"
    },
    "subsets": [
        {
            "addresses": [
                { "ip": "1.2.3.4" }
            ],
            "ports": [
                { "port": 9376 }
            ]
        }
    ]
}

With this, you can point your app inside the containers to my-service:80, and the traffic will be forwarded to 1.2.3.4:9376 (the Service's port maps to the Endpoints' port).
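
To sanity-check this from inside the cluster, you can run a throwaway pod and hit the Service name. This is just an illustrative check (the pod names are arbitrary, busybox:1.28 is a convenient image with a working nslookup, and the wget will only succeed if something is actually listening at 1.2.3.4:9376):

$ kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup my-service
$ kubectl run -it --rm http-test --image=busybox:1.28 --restart=Never -- wget -qO- http://my-service:80

The first command should resolve my-service to its ClusterIP; the second sends an actual request on port 80, which gets forwarded to 1.2.3.4:9376.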

Limitations:

  • The DNS name used may contain only letters, numbers, and dashes; you can't use multi-level names (something.like.this). This means you probably have to modify your app to point to just your-service, and not your-service.domain.tld.
  • You can only point to a specific IP, not a DNS name. For that, you can define a kind of DNS alias with an ExternalName-type Service, as sketched below.
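
For the second limitation, here is a minimal sketch of such an alias in the same style as above (database.example.com is a placeholder for your external DNS name; inside the cluster, my-db then resolves as a CNAME to it):

{
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": {
        "name": "my-db"
    },
    "spec": {
        "type": "ExternalName",
        "externalName": "database.example.com"
    }
}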

There are two possible solutions for this problem now:

  1. Pod-wise (adding the changes to every pod that needs to resolve these domains)
  2. Cluster-wise (adding the changes to a central place which all pods have access to, which in our case is the DNS)

Let's begin with the pod-wise solution:

As of Kubernetes 1.7, it's possible to add entries to a Pod's /etc/hosts directly using .spec.hostAliases.

For example, to resolve foo.local and bar.local to 127.0.0.1, and foo.remote and bar.remote to 10.1.2.3, you can configure HostAliases for a Pod under .spec.hostAliases:

apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  - ip: "10.1.2.3"
    hostnames:
    - "foo.remote"
    - "bar.remote"
  containers:
  - name: cat-hosts
    image: busybox
    command:
    - cat
    args:
    - "/etc/hosts"

The Cluster-wise solution:

As of Kubernetes v1.12, CoreDNS is the recommended DNS server, replacing kube-dns. If your cluster originally used kube-dns, you may still have kube-dns deployed rather than CoreDNS. I'm going to assume that you're using CoreDNS as your K8s DNS.

With CoreDNS it's possible to add arbitrary entries inside the cluster domain, and that way all pods will resolve these entries directly from the DNS, without the need to change each and every /etc/hosts file in every pod.

First, let's change the coredns ConfigMap and add the required changes:

$ kubectl edit cm coredns -n kube-system

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        hosts /etc/coredns/customdomains.db example.org {
          fallthrough
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . "/etc/resolv.conf"
        cache 30
        loop
        reload
        loadbalance
    }
  customdomains.db: |
    10.10.1.1 mongo-en-1.example.org
    10.10.1.2 mongo-en-2.example.org
    10.10.1.3 mongo-en-3.example.org
    10.10.1.4 mongo-en-4.example.org

Basically we added two things:

  1. The hosts plugin before the kubernetes plugin, using the fallthrough option of the hosts plugin to satisfy our case.

    To shed some more light on the fallthrough option: any given backend is usually the final word for its zone - it either returns a result, or it returns NXDOMAIN for the query. However, occasionally this is not the desired behavior, so some of the plugins support a fallthrough option. When fallthrough is enabled, instead of returning NXDOMAIN when a record is not found, the plugin passes the request down the chain. A backend further down the chain then has the opportunity to handle the request, and that backend in our case is kubernetes.

  2. We added a new file to the ConfigMap (customdomains.db) and added our custom domains (mongo-en-*.example.org) in there.

The last thing is to remember to add the customdomains.db file to the config-volume in the CoreDNS pod template:

$ kubectl edit -n kube-system deployment coredns

volumes:
- name: config-volume
  configMap:
    name: coredns
    items:
    - key: Corefile
      path: Corefile
    - key: customdomains.db
      path: customdomains.db

And finally, to make Kubernetes reload CoreDNS (restarting each running pod):

$ kubectl rollout restart -n kube-system deployment/coredns

@OxMH's answer is fantastic and can be simplified for brevity. CoreDNS allows you to specify hosts directly in the hosts plugin (https://coredns.io/plugins/hosts/#examples).

The ConfigMap can therefore be edited like so:

$ kubectl edit cm coredns -n kube-system 


apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        hosts {
          10.10.1.1 mongo-en-1.example.org
          10.10.1.2 mongo-en-2.example.org
          10.10.1.3 mongo-en-3.example.org
          10.10.1.4 mongo-en-4.example.org
          fallthrough
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . "/etc/resolv.conf"
        cache 30
        loop
        reload
        loadbalance
    }

You will still need to restart CoreDNS so that it rereads the config:

$ kubectl rollout restart -n kube-system deployment/coredns

Inlining the contents of the hosts file removes the need to mount it from the ConfigMap as a separate file. Both approaches achieve the same outcome; it is up to personal preference where you want to define the hosts.
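
Whichever variant you pick, you can verify the custom records from any pod, for example with a throwaway busybox pod (busybox:1.28 again, chosen only because its nslookup behaves well):

$ kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup mongo-en-1.example.org

This should answer with 10.10.1.1 straight from CoreDNS, with no /etc/hosts changes in the querying pod.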


A Service of type ExternalName is required to access hosts or IPs outside of the Kubernetes cluster. (Note that externalName is meant to hold a DNS name and is served as a CNAME record, so pointing it at a raw IP as below is not officially recommended, even though it can work.)

The following worked for me.

{
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": {
        "name": "tiny-server-5",
        "namespace": "default"
    },
    "spec": {
        "type": "ExternalName",
        "externalName": "192.168.1.15",
        "ports": [{ "port": 80 }]
    }
}
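
To see what this Service actually looks like (and to observe the CNAME behavior mentioned above), you can inspect it and query it from a throwaway pod (pod name and image are just illustrative):

$ kubectl get service tiny-server-5
$ kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup tiny-server-5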