Kubernetes: assign pods to a node pool

You can also use taints and tolerations. That way, you don't have to know/hardcode the specific pool name — you only need to know that it will carry the taint high-cpu, for example. Then you give your pods a toleration for that taint, and they can schedule on that target pool.
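As a sketch (the taint key `high-cpu` and its value are just illustrative names, not anything GKE sets for you), you taint the pool's nodes and give the pod a matching toleration:

```yaml
# Taint every node in the target pool, e.g.:
#   kubectl taint nodes <node-name> high-cpu=true:NoSchedule
# (on GKE you can also set --node-taints when creating the pool)

# Pod spec fragment: this toleration lets the pod schedule onto the
# tainted nodes — note it does not force the pod to land there.
tolerations:
- key: "high-cpu"
  operator: "Equal"
  value: "true"
  effect: "NoSchedule"
```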

That allows you to have multiple pools, or an HA pool deployment, where you can migrate from one pool to another by changing the taints on the pools.

The gotcha here, however, is that while a toleration allows pods to schedule on a tainted pool, it won't prevent them from scheduling elsewhere. So you need to taint pool-a with taint-a and pool-b with taint-b, and give the pods for pool-a and pool-b the proper tolerations, to keep them out of each other's pools.
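A sketch of that mutual-exclusion setup (the pool and taint names here are illustrative):

```yaml
# Taint each pool's nodes:
#   kubectl taint nodes <pool-a-node> taint-a=true:NoSchedule
#   kubectl taint nodes <pool-b-node> taint-b=true:NoSchedule

# Pods meant for pool-a tolerate only taint-a, so pool-b's taint
# still repels them (and vice versa for pool-b's pods):
tolerations:
- key: "taint-a"
  operator: "Equal"
  value: "true"
  effect: "NoSchedule"
```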


Or you do both!

  • use labels to select which pool to run on
  • use taints and tolerations to ensure that other pods don't try to run on this node pool
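Combined, a pod spec fragment might look like this (the `cloud.google.com/gke-nodepool` label is real on GKE; the `dedicated=highcpu` taint is an assumed example you'd apply yourself):

```yaml
spec:
  # Label selection: pin this pod to the chosen pool.
  nodeSelector:
    cloud.google.com/gke-nodepool: pool-highcpu32
  # Toleration: allow this pod onto the pool's taint; the taint
  # itself keeps out pods that don't tolerate it.
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "highcpu"
    effect: "NoSchedule"
```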

That means you don't need to taint-and-tolerate every pool — e.g. if you have a "default pool" where you want things to run by default (if users do nothing special to their pods, they will deploy here) and "other pools" for more special/restricted use cases.

This model allows pods to run without any special tweaks to their config, rather than tainting-and-tolerating everything, which means pods configured without tolerations never run at all.

It depends on your/your users' needs, how rigidly locked down you need everything to be, etc.

As always, there's more than one way to peel the dermis off a feline.


OK, I found a solution:

gcloud creates a label for the pool name. In my manifest I just dropped that under the nodeSelector. Very easy.

Here is my manifest.yaml (I deploy ipyparallel with Kubernetes):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ipengine
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ipengine
  template:
    metadata:
      labels:
        app: ipengine
    spec:
      containers:
      - name: ipengine
        image: <imageaddr.>
        args:
        - ipengine
        - --ipython-dir=/tmp/config/
        - --location=ipcontroller.default.svc.cluster.local
        - --log-level=0
        resources:
          requests:
            cpu: 1
            #memory: 3Gi
      nodeSelector:
        #<labelname>:value
        cloud.google.com/gke-nodepool: pool-highcpu32