I have searched Server Fault, Stack Overflow, and other sites but could not find a clear answer. I have done some reading on the basics of Linux storage and filesystems, but I'm still unclear about how to solve my problem.
My aim is to do a simple assessment of disk space and usage across the servers in our environment. We will run a bash script that includes the df -k command on each server, and the output text will be gathered for parsing and analysis. I'm having trouble understanding how to interpret the df -k output correctly to arrive at total disk space and usage.
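For reference, the collection step on each server would be something like this (the output path is just a placeholder):

#!/bin/bash
# Collect df output in POSIX format (1K blocks, one record per line)
# and tag the file with the hostname for later parsing.
df -P -k > "/tmp/df_$(hostname -s).txt"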
For now, we are ignoring networked and LVM-mapped storage (though I suspect they will be more involved and complicated than this situation). I'll deal with those in the near future. For now, I'm having trouble understanding even the simple scenarios below.
Scenario 1:
I created an Oracle Linux 7.9 VM in Oracle Cloud with a default boot volume of 46GB. Running df -h returned the following:
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        7.6G     0  7.6G   0% /dev
tmpfs           7.6G     0  7.6G   0% /dev/shm
tmpfs           7.6G  8.7M  7.6G   1% /run
tmpfs           7.6G     0  7.6G   0% /sys/fs/cgroup
/dev/sda3        39G  2.8G   36G   8% /
/dev/sda1       200M  7.4M  193M   4% /boot/efi
tmpfs           1.6G     0  1.6G   0% /run/user/0
tmpfs           1.6G     0  1.6G   0% /run/user/994
tmpfs           1.6G     0  1.6G   0% /run/user/1000
Question 1: What consistent logic could I apply to calculate total disk space and usage? In this case, sda3 (39G) plus one of the 7.6G tmpfs entries gets me to roughly 46GB, so should I ignore the remaining 7.6G tmpfs entries and all of the 1.6G tmpfs entries? Or should I simply ignore all tmpfs entries, given that tmpfs is volatile and not real storage? In that case, how would I arrive at the 46GB total?
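If the right answer is to simply exclude tmpfs/devtmpfs, I imagine the parsing would be roughly along these lines (a rough sketch that sums the 1K-block columns of df -P -k):

# Sum the Size and Used columns (in 1K blocks), skipping tmpfs and devtmpfs.
df -P -k | awk 'NR > 1 && $1 !~ /^(tmpfs|devtmpfs)$/ { size += $2; used += $3 }
                END { printf "total %.1f GiB, used %.1f GiB\n", size/1048576, used/1048576 }'

(I believe GNU df can do similar filtering with df -k -x tmpfs -x devtmpfs --total, but I'd still like to understand the underlying logic.)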
Scenario 2:
I created an Oracle Linux 7.9 VM in Oracle Cloud with a default boot volume of 200GB. Running df -h returned the following:
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs         30G     0   30G   0% /dev
tmpfs            30G     0   30G   0% /dev/shm
tmpfs            30G  8.8M   30G   1% /run
tmpfs            30G     0   30G   0% /sys/fs/cgroup
/dev/sda3        39G  3.5G   35G   9% /
/dev/sda1       200M  7.4M  193M   4% /boot/efi
tmpfs           5.9G     0  5.9G   0% /run/user/0
tmpfs           5.9G     0  5.9G   0% /run/user/994
tmpfs           5.9G     0  5.9G   0% /run/user/1000
Question 2: This one is more confusing. How would I arrive at a 200GB total disk size? It appears I'd need to count the sdaX entries AND all of the tmpfs entries, and even then the sum only comes to roughly 177GB. I'm having trouble finding a consistent logic that works for both scenarios.
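In case it helps, one cross-check I've considered is comparing the df totals against the raw block device size, along these lines:

# Size of the whole disk in bytes, independent of partitioning and mounts
lsblk -b -d -o NAME,SIZE /dev/sda
# Same information from sysfs (value is in 512-byte sectors)
cat /sys/block/sda/size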
I hope my questions are clear. I'd be glad to provide any additional details and/or clarifications.