Score:0

df -h --total command showing wrong total disk size


I have a 512GB disk attached to my Linux CentOS 7.9 server, and I'm trying to find out how much of the overall disk size is used, from inside the server.

I tried the `df -h --total` command to show the total disk size and used percentage, but it shows a 224GB total disk size and 13% used, which is wrong, because:

In Azure monitoring it shows 76% of the space used. Can anyone help with that?

I tried many commands like fdisk, lsblk, parted, etc., but got no accurate results.

The full output of `df -h --total`:

df -h --total
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs         16G     0   16G   0% /dev
tmpfs            16G     0   16G   0% /dev/shm
tmpfs            16G  136M   16G   1% /run
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/sda2        30G   25G  5.8G  81% /
/dev/sdb1       126G  4.1G  116G   4% /mnt/resource
shm              64M     0   64M   0% 
total           224G   29G  189G  14% -

`lsblk`:

NAME   FSTYPE LABEL       UUID                                 MOUNTPOINT    NAME    SIZE OWNER GROUP MODE
sda                                                                          sda      30G root  disk  
├─sda1                                                                       ├─sda1    1M root  disk  
└─sda2 xfs    centos_root 425e9325-f7cd-4d90-8548-4a79e37eb5b6 /             └─sda2   30G root  disk  
sdb                                                                          sdb     128G root  disk  
└─sdb1 ext4               6242553c-4d61-4420-b149-b2a3cb52c912 /mnt/resource └─sdb1  128G root  disk  
sdc                                                                          sdc     512G root  disk  
Gerald Schneider:
Please show the output from those commands. And please don't post screenshots, if possible just copy&paste the text and format it properly.
Mohammed Alkilani:
Here is the output: total size: 224GB, used: 29GB, avail: 189GB, use%: 14%
Gerald Schneider:
Please edit the full output of these commands in your question.
Mohammed Alkilani:
Thank you Gerald Schneider, I updated the post.
vidarlo:
Can you [edit] your question with the **exact** output of `du -h` and `lsblk`? Do not remove anything, and use the `{}` button to format it.
Score:1
vidarlo:

The 512GB disk is /dev/sdc. It is not mounted in your OS, and is thus not included in the total shown by `df -h`.
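For the OS to use the disk (and for `df` to count it), it needs a partition, a filesystem and a mount point. A minimal sketch, assuming /dev/sdc is empty; the XFS filesystem type and the /data mount point are just placeholder choices, adapt them to your setup:

# Create a GPT label and one partition spanning the whole disk (destroys any data on sdc!)
parted --script /dev/sdc mklabel gpt mkpart primary xfs 0% 100%

# Create an XFS filesystem on the new partition
mkfs.xfs /dev/sdc1

# Mount it and verify that df now includes it
mkdir -p /data
mount /dev/sdc1 /data
df -h --total

To make the mount survive a reboot you would also add an /etc/fstab entry, e.g. using the UUID reported by `blkid /dev/sdc1`.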

Mohammed Alkilani:
How can the total size be 224GB if sdc is not mounted?
diya:
Because `df` includes 4x 16GB = 64GB of tmpfs (in-memory) file systems that aren't backed by disk(s).
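If you only want disk-backed filesystems in the total, you can exclude those types, e.g. (a sketch; `-x`/`--exclude-type` is a GNU coreutils `df` option):

df -h --total -x tmpfs -x devtmpfs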
Mohammed Alkilani:
Thank you very much for the explanation, but why does this happen in an Azure VM? I already attached the 512GB to the machine.
vidarlo:
You've attached the disk to the VM. You've not told the operating system what to do with the disk. However, teaching basic system administration is probably not in the scope of this site - I would suggest picking up a book on Linux system administration and possibly Azure as well.
Mohammed Alkilani:
Thank you @vidarlo
vidarlo:
No problem :) And best of luck :)
Score:0
diya:

I think that your Azure monitoring doesn't have an agent reporting on the `df` disk usage as seen by your VM, but instead reports on the storage consumption as seen by the storage layer, hence the discrepancy.

Most cloud providers use some form of thin provisioning when assigning storage.

So when you assign a 512 GB virtual disk to a VM, your VM sees 512GB available, but the actual storage that gets consumed will initially be much closer to 0 GB than to the allocated 512GB. The empty disk space is not allocated (yet) in the back-end; only once you start writing data to that disk will the actual disk consumption, as measured in the back-end, increase.

In other words: after writing 100GB to that disk, running `df` will show 100GB used inside the VM, and looking from the storage back-end you will also see that 100GB is used.

When you delete 80 GB of your files, something interesting can happen: running `df` will show only 20GB still in use inside the VM, but the storage back-end will report that 100GB is still in use. That is because some/many storage back-ends can't reclaim thin-provisioned storage once it has been allocated: the storage can only be reclaimed when the complete virtual disk gets deleted, not when the VM deletes files/data on the virtual disk.
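On back-ends that do support reclaiming, the guest usually has to signal which blocks are free via TRIM/discard, e.g. with `fstrim` from util-linux (a sketch; whether the space is actually reclaimed depends on the virtual disk and the storage layer supporting discard):

# Ask the filesystem to discard unused blocks; -v reports how much was trimmed
fstrim -v /mnt/resource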
