Kubernetes - enforce nodeAffinity/nodeSelector

I'm trying to achieve this:

  • Anything in the app namespace gets scheduled onto specific nodes
  • other namespaces do not have the ability to schedule pods onto those nodes
  • developers should not have the option to interfere with this

So, I should probably use the PodNodeSelector and PodTolerationRestriction admission plugins; however, they require an API server restart, a few articles claim they will be deprecated once NodeAffinity is good enough, and I do not feel skilled enough to write a dynamic admission controller.
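
From what I've read, enabling those plugins means restarting kube-apiserver with them added to its admission-plugin list; a minimal sketch, where the rest of the flag list is whatever the cluster already uses:

# hypothetical excerpt of the kube-apiserver invocation; append the plugins
# to whatever --enable-admission-plugins list is already configured
kube-apiserver \
  --enable-admission-plugins=PodNodeSelector,PodTolerationRestriction \
  ...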

However, reading about nodeSelector and taints & tolerations, it seems these have to be managed by the developers in their deployments, and they could simply ignore them.

Is there any other option I'm missing, or is there any way to enforce a nodeSelector / taints & tolerations so that developers cannot change them?

Thank you

Hi wwwnick, welcome to S.F. It sounds like you have a people problem, not a software problem; but that said, if you really want to solve this in software, an [admission webhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) such as [OPA Gatekeeper](https://www.openpolicyagent.org/docs/v0.46.1/kubernetes-introduction/#what-is-opa-gatekeeper) will likely do what you want
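
As a sketch of that approach, a Gatekeeper Assign mutation could inject the nodeSelector into every pod created in the app namespace (assuming Gatekeeper v3 with mutation enabled, and a hypothetical workload=reserved node label like the one used in the answer below):

apiVersion: mutations.gatekeeper.sh/v1
kind: Assign
metadata:
  name: app-nodeselector    # hypothetical policy name
spec:
  applyTo:
  - groups: [""]
    versions: ["v1"]
    kinds: ["Pod"]
  match:
    scope: Namespaced
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]
    namespaces: ["app"]
  location: "spec.nodeSelector"
  parameters:
    assign:
      value:
        workload: reserved
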
SYN

You are looking for taints and tolerations, in addition to a nodeSelector.

A taint is a way to mark a node. Such marks may be used to prevent workloads from being scheduled on your nodes, while tolerations may be defined on your workloads to allow for exceptions.

You could taint your node with something like:

kubectl taint nodes worker1.example.com workload=reserved:NoSchedule
kubectl taint nodes workerN.example.com workload=reserved:NoSchedule

And have your special applications set the following in their pod spec, which would tolerate that taint:

spec:
  containers:
  - [...]
  tolerations:
  - key: workload
    operator: Equal
    value: reserved
    effect: NoSchedule

And while this would ensure that no pod gets scheduled over there unless it has that toleration: you would probably want to add some label to your nodes, and use a nodeSelector, ensuring your special pods may only start on your reserved nodes.

You would add a label with:

kubectl label node worker1.example.com workload=reserved
kubectl label node workerN.example.com workload=reserved

And add a nodeSelector to your pod definition:

spec:
  containers:
  - [...]
  nodeSelector:
    workload: reserved
wwwnick
Yes, unfortunately that is something the developers have to manage in their deployments. What I need is something that they do not need to set and are effectively denied from changing. So I guess there is no other option than PodNodeSelector or Gatekeeper/Kyverno.
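
Something like this Kyverno mutate policy seems to be what I'd need (a sketch, assuming Kyverno is installed and reusing the hypothetical workload=reserved label from the answer):

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: enforce-app-nodeselector    # hypothetical policy name
spec:
  rules:
  - name: add-nodeselector
    match:
      any:
      - resources:
          kinds:
          - Pod
          namespaces:
          - app
    mutate:
      patchStrategicMerge:
        spec:
          nodeSelector:
            workload: reserved
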
SYN
You "may" be able to set nodeSelectors at the namespace lever, as an annotation. See https://kubernetes.io/docs/reference/labels-annotations-taints/#schedulerkubernetesnode-selector .Although that one relies on some controller that is disabled by deefault. Sidenote: if you're using OpenShift, they have something similar, that works OOB. Otherwise: yes, you may have to implement this, OPA, custom admission/mutation, ...