It just means that the sum of their pods' requests.memory values can't
exceed 8Gb.
Yes, this is the logic for ResourceQuota. From Understanding Resource Quotas:
Resource quotas work like this:
- Users put compute resource requests on their pods. The sum of all resource requests across all pods in the same namespace must not exceed any hard resource limit in any Resource Quota document for the namespace. Note that we used to verify Resource Quota by taking the sum of resource limits of the pods, but this was altered to use resource requests. Backwards compatibility for those pods previously created is preserved because pods that only specify a resource limit have their resource requests defaulted to match their defined limits. The user is only charged for the resources they request in the Resource Quota versus their limits because the request is the minimum amount of resource guaranteed by the cluster during scheduling. For more information on over commit, see compute-resources.
- If creating a pod would cause the namespace to exceed any of the limits specified in the Resource Quota for that namespace, then the request will fail with HTTP status code 403 FORBIDDEN.
- If quota is enabled in a namespace and the user does not specify requests on the pod for each of the resources for which quota is enabled, then the POST of the pod will fail with HTTP status code 403 FORBIDDEN. Hint: Use the LimitRange admission controller to force default values of limits (then resource requests would be equal to limits by default, see admission controller) before the quota is checked to avoid this problem.
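For illustration, here is a minimal sketch with the official Kubernetes Python client: it creates a ResourceQuota that caps the sum of requests.memory (I use 8Gi as the quantity), plus a LimitRange so pods that omit requests still pass the quota check. The namespace, object names and default sizes are placeholders, not anything from your cluster:

```python
# Minimal sketch with the official Kubernetes Python client (pip install kubernetes).
# Namespace, object names and default sizes below are hypothetical.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

NAMESPACE = "team-a"  # hypothetical namespace

# ResourceQuota: the sum of requests.memory over all pods in the namespace
# may not exceed 8Gi; a pod that would push it over is rejected with 403.
quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name="mem-quota"),
    spec=client.V1ResourceQuotaSpec(hard={"requests.memory": "8Gi"}),
)
core.create_namespaced_resource_quota(namespace=NAMESPACE, body=quota)

# LimitRange: gives pods that omit requests/limits a default, so they are
# not rejected by the quota admission check (the "Hint" from the docs above).
limit_range = client.V1LimitRange(
    metadata=client.V1ObjectMeta(name="mem-defaults"),
    spec=client.V1LimitRangeSpec(
        limits=[
            client.V1LimitRangeItem(
                type="Container",
                default={"memory": "512Mi"},          # default limit
                default_request={"memory": "256Mi"},  # default request
            )
        ]
    ),
)
core.create_namespaced_limit_range(namespace=NAMESPACE, body=limit_range)
```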
However, the article also expands on cases where you need to divide resources separately, and that is not something that is implemented out of the box.
Quota and Cluster Capacity:
Sometimes more complex policies may be desired, such as:
- proportionally divide total cluster resources among several teams.
- allow each tenant to grow resource usage as needed, but have a generous limit to prevent accidental resource exhaustion.
- detect demand from one namespace, add nodes, and increase quota.
Such policies could be implemented using ResourceQuota as a building-block, by writing a 'controller' which watches the quota usage and adjusts the quota hard limits of each namespace according to other signals.
I expect you would need to write that custom logic in your own controller.
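As a very rough sketch of what such a controller could look like (again with the Python client; the requests.memory resource, the 90% threshold and the doubling policy are made-up placeholders, not anything the docs prescribe), it could watch ResourceQuota status and patch the hard limits when usage gets close:

```python
# Rough sketch only: watch quota usage and grow the hard limit when a
# namespace gets close to it. Threshold and growth factor encode a
# hypothetical policy; replace them with whatever signals you care about.
from kubernetes import client, config, watch
from kubernetes.utils import parse_quantity  # "8Gi" -> Decimal, in recent client versions

config.load_kube_config()
core = client.CoreV1Api()

THRESHOLD = 0.9      # act once requests.memory usage passes 90% of the hard limit
GROWTH_FACTOR = 2    # then double the quota (hypothetical policy)

for event in watch.Watch().stream(core.list_resource_quota_for_all_namespaces):
    quota = event["object"]
    status = quota.status
    if not status or not status.hard or not status.used:
        continue
    hard = parse_quantity(status.hard.get("requests.memory", "0"))
    used = parse_quantity(status.used.get("requests.memory", "0"))
    if hard and used / hard >= THRESHOLD:
        new_hard = str(int(hard) * GROWTH_FACTOR)  # plain byte count is a valid quantity
        core.patch_namespaced_resource_quota(
            name=quota.metadata.name,
            namespace=quota.metadata.namespace,
            body={"spec": {"hard": {"requests.memory": new_hard}}},
        )
```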
Please also take a look at How To Force Kubernetes Namespaces To Have ResourceQuotas Using OPA.