Score:2

How to limit root disk space for a pod

I have a pod deployed on a node with a 100 GB volume. I only want the pod to have 50 GB of root disk space.

So I have the following config in deployment.yaml:

         resources:
            requests:
              ephemeral-storage: "50G"
            limits:
              ephemeral-storage: "70G"

But when I checked the container (there is only one container in the pod), I saw that all the disk space on the node was allocated to the pod. From what I read here, I thought ephemeral-storage controlled how much disk space is allocated to the pod.

# df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay         100G  6.5G   94G   7% /
tmpfs            64M     0   64M   0% /dev
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/nvme0n1p1  100G  6.5G   94G   7% /etc/hosts
shm              64M     0   64M   0% /dev/shm
tmpfs           1.9G     0  1.9G   0% /proc/acpi
tmpfs           1.9G     0  1.9G   0% /sys/firmware

Any idea what I did wrong?

user3908406: I just edited the question. I meant that all 100 GB was allocated to the pod, but I only want the pod to have a 50 GB root disk.
Score:2

The fact that the whole space (/) is visible to you doesn't mean that all of it is available/allocatable to your Pod.

The kubelet will monitor your Pod's ephemeral storage usage and act accordingly, by evicting a Pod that exceeds its limit:

Ephemeral storage consumption management

If the kubelet is managing local ephemeral storage as a resource, then the kubelet measures storage use in:

  • emptyDir volumes, except tmpfs emptyDir volumes
  • directories holding node-level logs
  • writeable container layers

If a Pod is using more ephemeral storage than you allow it to, the kubelet sets an eviction signal that triggers Pod eviction.

-- Kubernetes.io: Docs: Concepts: Configuration: Manage resources containers: Resource ephemeral storage consumption
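
If you want to see the numbers the kubelet is actually working with, its stats summary endpoint reports per-Pod ephemeral-storage usage. A minimal check, assuming the node name below is a placeholder and the `ephemeral-storage` field name as exposed by recent Kubernetes versions:

$ kubectl get --raw "/api/v1/nodes/<node-name>/proxy/stats/summary" \
    | python3 -m json.tool | grep -A 5 '"ephemeral-storage"'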


Please take a look at the example below.

Assuming that you have the following Pod manifest:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      requests:
        ephemeral-storage: "2Gi"
      limits:
        ephemeral-storage: "4Gi"

The limit configured for ephemeral storage is 4Gi. To check what will happen, you can run:

$ kubectl exec -it nginx -- fallocate -l 10GB /evict.img

A side note!

fallocate is used to manipulate the allocated disk space for a file, either to deallocate or preallocate it.

-- Man7.org: Linux: Man pages: Fallocate

After some time you should see:

$ kubectl describe pod nginx
Name:         nginx
Namespace:    default
Priority:     0
Node:         XYZ
Start Time:   Mon, 05 Jul 2021 09:47:08 +0200
Labels:       <none>
Annotations:  <none>
Status:       Failed # <-- IMPORTANT!
Reason:       Evicted  # <-- IMPORTANT!
Message:      Pod ephemeral local storage usage exceeds the total limit of containers 4Gi.  # <-- IMPORTANT!
<-- REDACTED --> 
    Limits:
      ephemeral-storage:  4Gi # <-- IMPORTANT!
    Requests:
      ephemeral-storage:  2Gi
<-- REDACTED --> 
Events:
  Type     Reason     Age   From               Message
  ----     ------     ----  ----               -------
  Normal   Scheduled  32s   default-scheduler  Successfully assigned default/nginx to XYZ
  Normal   Pulling    31s   kubelet            Pulling image "nginx"
  Normal   Pulled     31s   kubelet            Successfully pulled image "nginx" in 360.098619ms
  Normal   Created    31s   kubelet            Created container nginx
  Normal   Started    31s   kubelet            Started container nginx
  Warning  Evicted    3s    kubelet            Pod ephemeral local storage usage exceeds the total limit of containers 4Gi. # <-- IMPORTANT!
  Normal   Killing    3s    kubelet            Stopping container nginx  # <-- IMPORTANT!

$ kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   0/1     Evicted   0          9m7s
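
A side note: if what you want is a hard cap on a specific writable path rather than relying on eviction from the writable container layer, an emptyDir volume with sizeLimit behaves similarly (exceeding the limit also gets the Pod evicted). A minimal sketch, not part of the original setup:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-sized
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: scratch
      mountPath: /scratch        # writes under /scratch count against sizeLimit
  volumes:
  - name: scratch
    emptyDir:
      sizeLimit: 50Gi            # kubelet evicts the Pod when usage exceeds this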

user3908406: Thanks. What if I have 3 pods, each requesting 40 GB of ephemeral-storage, and 2 nodes, each with a 100 GB volume? Assuming CPU and memory are not a concern, would Kubernetes schedule 2 pods on one node and 1 pod on the other node to avoid eviction?
Dawid Kruk: @user3908406 In short, yes. The scheduler places `Pods` based on each `Node`'s allocatable resources and the `Pods`' `requests` (if a `Pod` can't fit, it will remain in the `Pending` state). On the topic of available ephemeral storage, I'd check `$ kubectl describe node`, where you can find the `limits`/`requests` on the particular `Node`. Apart from that, I'd reckon Kubernetes would balance the `Pods` across the `Nodes` to ensure high availability in case of a `Node` failure.
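
As an illustration of the check mentioned in that comment (the node name is a placeholder; `Allocatable` is a standard section of `kubectl describe node` output):

$ kubectl describe node <node-name> | grep -A 8 'Allocatable:'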