How do I check the max pod capacity of a Kubernetes node?
Solution 1:
Inside the Kubernetes docs regarding Building large clusters we can read that v1.17 supports the following (you can check your own cluster against these figures with the commands after the list):
Kubernetes supports clusters with up to 5000 nodes. More specifically, we support configurations that meet all of the following criteria:
- No more than 5000 nodes
- No more than 150000 total pods
- No more than 300000 total containers
- No more than 100 pods per node
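A rough way to check where a cluster stands against those totals, using plain kubectl (the pod count includes system pods in all namespaces):
# Count nodes and pods to compare against the limits above
kubectl get nodes --no-headers | wc -l
kubectl get pods --all-namespaces --no-headers | wc -l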
Inside GKE the hard limit is 110 pods per node, because of the available IP addresses.
With the default maximum of 110 Pods per node, Kubernetes assigns a /24 CIDR block (256 addresses) to each of the nodes. By having approximately twice as many available IP addresses as the number of pods that can be created on a node, Kubernetes is able to mitigate IP address reuse as Pods are added to and removed from a node.
This is described in Optimizing IP address allocation and Quotas and limits.
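You can see this allocation on your own cluster by reading each node's assigned Pod CIDR straight from the node object; a quick sketch using kubectl's custom-columns output (it assumes your network plugin populates spec.podCIDR):
kubectl get nodes -o custom-columns='NAME:.metadata.name,POD_CIDR:.spec.podCIDR'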
As for setting max pods in Rancher, there is a solution here: [Solved] Setting Max Pods.
There is also a discussion about Increase maximum pods per node:
... using a single number (max pods) can be misleading for the users, given the huge variation in machine specs, workload, and environment. If we have a node benchmark, we can let users profile their nodes and decide what is the best configuration for them. The benchmark can exist as a node e2e test, or in the contrib repository.
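Outside of a hosted product, that limit ultimately comes from the kubelet's maxPods setting, so you can tune it yourself. A minimal sketch, assuming a kubeadm-style node whose kubelet reads its KubeletConfiguration from /var/lib/kubelet/config.yaml (the path and the value 250 are assumptions for illustration):
# Set maxPods in the kubelet's config file, e.g. add or edit the line:
#   maxPods: 250
# in /var/lib/kubelet/config.yaml, then restart the kubelet so the node
# re-registers with the new capacity:
sudo systemctl restart kubelet
# Verify the advertised capacity afterwards (node name is a placeholder):
kubectl describe node <node-name> | grep -B 5 'pods:'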
I hope this provides a bit more insight into the limits.
Solution 2:
I found this to be the best way:
kubectl get nodes
NAME          STATUS   ROLES                      AGE   VERSION
192.168.1.1   Ready    controlplane,etcd,worker   9d    v1.17.2
192.168.1.2   Ready    controlplane,etcd,worker   9d    v1.17.2
kubectl describe node 192.168.1.1 | grep -B 5 'pods:'
Capacity:
  cpu:                16
  ephemeral-storage:  55844040Ki
  hugepages-2Mi:      0
  memory:             98985412Ki
  pods:               110
Allocatable:
  cpu:                16
  ephemeral-storage:  51465867179
  hugepages-2Mi:      0
  memory:             98883012Ki
  pods:               110
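If you only need the pods figure for every node, you can skip the grep and query the capacity field directly; a sketch using kubectl's built-in custom-columns output:
kubectl get nodes -o custom-columns='NAME:.metadata.name,MAX_PODS:.status.capacity.pods'
Swap .status.capacity.pods for .status.allocatable.pods to get the allocatable count instead.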