How to reduce CPU limits of kubernetes system resources?

Changing the default Namespace's LimitRange spec.limits.defaultRequest.cpu should be a legitimate solution for changing the default for new Pods. Note that LimitRange objects are namespaced, so if you use extra Namespaces you probably want to think about what a sane default is for them.
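
For reference, a minimal LimitRange sketch that sets a default CPU request (and limit) for new Pods in the default Namespace - the object name and the 100m/200m values are only illustrative:

apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-defaults        # illustrative name
  namespace: default
spec:
  limits:
  - type: Container
    defaultRequest:          # default CPU request applied to new containers
      cpu: 100m
    default:                 # default CPU limit applied to new containers
      cpu: 200m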

As you point out, this will not affect existing objects or objects in the kube-system Namespace.

The objects in the kube-system Namespace were mostly sized empirically - based on observed values. Changing those might have detrimental effects, but maybe not if your cluster is very small.

We have an open issue (https://github.com/kubernetes/kubernetes/issues/13048) to adjust the kube-system requests based on total cluster size, but that is not implemented yet. We have another open issue (https://github.com/kubernetes/kubernetes/issues/13695) to perhaps use a lower QoS for some kube-system resources, but again - not implemented yet.

Of these, I think that #13048 is the right way to implement what you're asking for. For now, the answer to "is there a better way" is sadly "no". We chose defaults for medium-sized clusters - for very small clusters you probably need to do what you are doing.


As stated by @Tim Hockin, the default configurations of the add-ons are appropriate for typical clusters, but they can be fine-tuned by changing the resource limit specification.

Before working on add-on resizing, remember that you can also disable unnecessary add-ons for your use case. This can vary a little depending on the add-on, its version, the Kubernetes version, and the provider. Google has a page covering some options, and the same concepts can be used with other providers too.
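
As an illustration only (the exact command and add-on names vary by provider and version), on GKE built-in add-ons can be disabled at the cluster level, for example:

# my-cluster is a placeholder for your cluster name
gcloud container clusters update my-cluster \
    --update-addons=HttpLoadBalancing=DISABLED,HorizontalPodAutoscaling=DISABLED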

As for the solution to the issue linked in @Tim Hockin's answer, the first accepted way to do it is by using addon-resizer. It basically finds out the best limits and requests, patches the Deployment/Pod/DaemonSet, and recreates the associated Pods to match the new limits, but with less effort than doing all of it manually.
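
For a rough idea of what that looks like, addon-resizer usually runs as a "nanny" sidecar container inside the Deployment it resizes. A sketch of such a container, with illustrative image tag, flag values, and target names (check the addon-resizer documentation for the flags supported by your version):

# sidecar added to the Deployment's pod spec; values below are examples only
- name: addon-resizer
  image: k8s.gcr.io/addon-resizer:1.8.11
  command:
    - /pod_nanny
    - --container=metrics-server            # container to resize
    - --deployment=metrics-server-v0.3.1    # owning Deployment to patch
    - --cpu=40m                             # base CPU request
    - --extra-cpu=0.5m                      # extra CPU added per node
    - --memory=40Mi                         # base memory request
    - --extra-memory=4Mi                    # extra memory added per node

The base values plus the per-node extras are how addon-resizer scales the add-on's requests with cluster size.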

However, another, more robust way to achieve that is by using the Vertical Pod Autoscaler (VPA), as stated in @Tim Smart's answer. VPA accomplishes what addon-resizer does, but it has several benefits:

  • VPA is configured through a custom resource per add-on, so the configuration you maintain is much more compact than with addon-resizer.
  • Being based on a custom resource definition, it is also much easier to keep the implementation up to date.
  • Some providers (such as Google) run the VPA components as control-plane processes instead of as Deployments on your worker nodes. Because of that, even though addon-resizer is simpler, VPA consumes none of your node resources while addon-resizer would.

An updated template would be:

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: <addon-name>-vpa
  namespace: kube-system
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind:       <addon-kind (Deployment/DaemonSet/Pod)>
    name:       <addon-name>
  updatePolicy:
    updateMode: "Auto"

It is important to check the add-ons being used in your current cluster, as they can vary a lot by provider (AWS, Google, etc.) and by Kubernetes version.
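
To see which add-ons your cluster is actually running, and what they currently request, something like this works on most clusters:

# list the add-on workloads in kube-system
kubectl get deployments,daemonsets -n kube-system

# show the CPU requests of each kube-system pod
kubectl get pods -n kube-system \
    -o custom-columns=NAME:.metadata.name,CPU:.spec.containers[*].resources.requests.cpu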

Make sure you have the VPA add-on installed in your cluster (most managed Kubernetes services offer it as an easy option to enable).
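
For example, you can check whether the VPA custom resource is available, and on GKE enable it at the cluster level (my-cluster is a placeholder):

# check that the VPA CRD exists
kubectl get crd verticalpodautoscalers.autoscaling.k8s.io

# on GKE, enable VPA for the cluster
gcloud container clusters update my-cluster --enable-vertical-pod-autoscaling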

The update policy can be Initial (only applies new limits when new Pods are created), Recreate (evicts Pods that are out of spec and applies the new limits to their replacements), Off (creates recommendations but doesn't apply them), or Auto (currently matches Recreate, but may change in the future).
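
If you start with Off (or Initial), you can inspect the recommendations the VPA produces before trusting Auto, for example:

kubectl get vpa -n kube-system
kubectl describe vpa <addon-name>-vpa -n kube-system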

The only differences from the example in @Tim Smart's answer are that the current API version is autoscaling.k8s.io/v1, the current API version of the targets is apps/v1, and that newer versions of some providers use FluentBit in place of Fluentd. His answer might be better suited to earlier Kubernetes versions.

If you are using Google Kubernetes Engine, for example, some of the add-ons with the "heaviest" requests are currently:

  • fluentbit-gke (DaemonSet)
  • gke-metadata-server (DaemonSet)
  • kube-proxy (DaemonSet)
  • kube-dns (Deployment)
  • stackdriver-metadata-agent-cluster-level (Deployment)

By applying VPAs to them, my add-on resource requests dropped from 1.6 to 0.4 CPU.
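
One way to verify the difference is to compare the node allocations before and after applying the VPAs (the node name is a placeholder):

# "Allocated resources" shows the summed requests/limits on the node
kubectl describe node <node-name> | grep -A 10 "Allocated resources"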


I have found that one of the best ways to reduce the system resource requests on a GKE cluster is to use a Vertical Pod Autoscaler.

Here are the VPA definitions I have used:

apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  namespace: kube-system
  name: kube-dns-vpa
spec:
  targetRef:
    apiVersion: "extensions/v1beta1"
    kind: Deployment
    name: kube-dns
  updatePolicy:
    updateMode: "Auto"

---

apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  namespace: kube-system
  name: heapster-vpa
spec:
  targetRef:
    apiVersion: "extensions/v1beta1"
    kind: Deployment
    name: heapster-v1.6.0-beta.1
  updatePolicy:
    updateMode: "Initial"

---

apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  namespace: kube-system
  name: metadata-agent-vpa
spec:
  targetRef:
    apiVersion: "extensions/v1beta1"
    kind: DaemonSet
    name: metadata-agent
  updatePolicy:
    updateMode: "Initial"

---

apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  namespace: kube-system
  name: metrics-server-vpa
spec:
  targetRef:
    apiVersion: "extensions/v1beta1"
    kind: Deployment
    name: metrics-server-v0.3.1
  updatePolicy:
    updateMode: "Initial"

---

apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  namespace: kube-system
  name: fluentd-vpa
spec:
  targetRef:
    apiVersion: "extensions/v1beta1"
    kind: DaemonSet
    name: fluentd-gcp-v3.1.1
  updatePolicy:
    updateMode: "Initial"

---

apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  namespace: kube-system
  name: kube-proxy-vpa
spec:
  targetRef:
    apiVersion: "extensions/v1beta1"
    kind: DaemonSet
    name: kube-proxy
  updatePolicy:
    updateMode: "Initial"

Here is a screenshot of what it does to a kube-dns deployment.