Score:2

Guarantee ResourceQuota in a Namespace

I'm running a cluster that is shared across teams and I'd like to guarantee each team a minimum amount of resources, especially memory.

Following the instructions, I've tried applying the following to their namespace:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-quota
spec:
  hard:
    requests.memory: 8Gi

However, from reading more of the docs, it turns out this doesn't guarantee they have 8Gi of memory for their pods. It just means the sum of their pods' requests.memory values can't exceed 8Gi. It's possible that they could have 8Gi set as above, only be using 4Gi, and be unable to create a new pod if the cluster was maxed out elsewhere and the new pod couldn't be scheduled.
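As I understand it, the only scheduling-time guarantee comes from each pod's own resources.requests, which the quota merely counts; a minimal pod sketch (name and image are just illustrative) looks like this:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod          # illustrative name
spec:
  containers:
  - name: app
    image: nginx             # placeholder image
    resources:
      requests:
        memory: 1Gi          # counted against the namespace ResourceQuota
      limits:
        memory: 1Gi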

Also for example, I can create a ResourceQuota with a requests.memory value of 16Gi on a cluster with only 8Gi of total memory.

Is there any way to guarantee a team a fixed amount of memory for only their use?

Score:0
Vit

It just means the sum of their pods' requests.memory values can't exceed 8Gi

Yes, this is the logic for ResourceQuota. From Understanding Resource Quotas:

Resource quotas work like this:

  • Users put compute resource requests on their pods. The sum of all resource requests across all pods in the same namespace must not exceed any hard resource limit in any Resource Quota document for the namespace. Note that we used to verify Resource Quota by taking the sum of resource limits of the pods, but this was altered to use resource requests. Backwards compatibility for those pods previously created is preserved because pods that only specify a resource limit have their resource requests defaulted to match their defined limits. The user is only charged for the resources they request in the Resource Quota versus their limits because the request is the minimum amount of resource guaranteed by the cluster during scheduling. For more information on over commit, see compute-resources.

  • If creating a pod would cause the namespace to exceed any of the limits specified in the Resource Quota for that namespace, then the request will fail with HTTP status code 403 FORBIDDEN.

  • If quota is enabled in a namespace and the user does not specify requests on the pod for each of the resources for which quota is enabled, then the POST of the pod will fail with HTTP status code 403 FORBIDDEN. Hint: Use the LimitRange admission controller to force default values of limits (then resource requests would be equal to limits by default, see admission controller) before the quota is checked to avoid this problem.
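
As a rough sketch of that LimitRange hint (the name and values below are just illustrative, not taken from your setup), defaults can be applied per namespace like this:

apiVersion: v1
kind: LimitRange
metadata:
  name: mem-defaults         # illustrative name
spec:
  limits:
  - type: Container
    defaultRequest:
      memory: 512Mi          # request applied when a container specifies none
    default:
      memory: 1Gi            # limit applied when a container specifies none

With something like this in place, pods created without explicit requests still get charged against the quota instead of being rejected.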


However, the article also expands a bit on cases where you need to divide resources separately, and this is not something that is already implemented:

Quota and Cluster Capacity: Sometimes more complex policies may be desired, such as:

  • proportionally divide total cluster resources among several teams.
  • allow each tenant to grow resource usage as needed, but have a generous limit to prevent accidental resource exhaustion.
  • detect demand from one namespace, add nodes, and increase quota.

Such policies could be implemented using ResourceQuota as a building-block, by writing a 'controller' which watches the quota usage and adjusts the quota hard limits of each namespace according to other signals.


I expect you will need to write custom logic in your own controller.

Please also take a look at How To Force Kubernetes Namespaces To Have ResourceQuotas Using OPA.
