AWS EKS 0/1 nodes are available. 1 insufficient pods
The issue is that you are using a t2.micro instance, and at least a t2.small is required. The scheduler is not able to schedule the pod on the node because the t2.micro does not have enough pod capacity: it allows at most 4 pods per node, and most of that is already taken by system pods such as aws-node and kube-proxy. Use a t2.small at the minimum.
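You can confirm this on your own cluster. A quick check (substitute your actual node and pod names, which kubectl get nodes and kubectl get pods will list):

# Shows the node's pod capacity/allocatable and the pods already scheduled on it
kubectl describe node <node-name>
# The pending pod's events show the scheduler's reason, e.g.
# "0/1 nodes are available: 1 Insufficient pods."
kubectl describe pod <pending-pod-name>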
On Amazon Elastic Kubernetes Service (EKS), the maximum number of pods per node depends on the instance type and ranges from 4 to 737.
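That per-instance-type limit comes from ENI networking: the EKS AMI derives max pods from the number of network interfaces and IPv4 addresses per interface available to the instance type. A worked example using that formula (the ENI and IP counts here are the published t2 limits; verify them for your type against the file linked below):

# max_pods = ENIs * (IPv4 addresses per ENI - 1) + 2
echo $((2 * (2 - 1) + 2))   # t2.micro: 2 ENIs, 2 IPs each => 4
echo $((3 * (4 - 1) + 2))   # t2.small: 3 ENIs, 4 IPs each => 11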
If you hit this limit, you will see something like:
❯ kubectl get node -o yaml | grep pods
pods: "17" => this is allocatable pods that can be allocated in node
pods: "17" => this is how many running pods you have created
If the grep returns only one number, it is most likely the allocatable value. To count the pods actually running across all namespaces, run the following command:
kubectl get pods --all-namespaces | grep Running | wc -l
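A slightly more precise variant filters on the pod phase itself instead of the printed status column:

kubectl get pods --all-namespaces --field-selector=status.phase=Running --no-headers | wc -l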
Here's the list of max pods per node type: https://github.com/awslabs/amazon-eks-ami/blob/master/files/eni-max-pods.txt
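For a quick lookup of one instance family, you can grep the raw file (raw.githubusercontent.com is just the raw view of the file linked above):

curl -s https://raw.githubusercontent.com/awslabs/amazon-eks-ami/master/files/eni-max-pods.txt | grep "^t2"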
On Google Kubernetes Engine (GKE), the limit is 110 pods per node. Check the following URL:
https://github.com/kubernetes/community/blob/master/sig-scalability/configs-and-limits/thresholds.md
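On GKE the per-node value can also be set at cluster creation time; a sketch using the --default-max-pods-per-node flag (the cluster name and value are placeholders):

# Lower the per-node pod limit (default 110) for a new cluster
gcloud container clusters create my-cluster --default-max-pods-per-node 64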
On Azure Kubernetes Service (AKS), the default limit is 30 pods per node, but it can be increased up to 250. The default maximum varies between kubenet and Azure CNI networking and with the method of cluster deployment. Check the following URL for more information:
https://docs.microsoft.com/en-us/azure/aks/configure-azure-cni#maximum-pods-per-node
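On AKS the limit is set per node pool at creation time; a sketch using the documented --max-pods flag (the resource group, cluster, and pool names are placeholders):

# Add a node pool with a higher pod-per-node limit (Azure CNI supports up to 250)
az aks nodepool add --resource-group my-rg --cluster-name my-aks --name bigpods --max-pods 250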