Score:0

Can't get disk space back after running out of space (and removing some files) in Ubuntu 18.04


This is driving me crazy! My server ran out of space. I cleaned up by removing some folders, but the amount of free space (percentage-wise) didn't go up. This is what I now see:

(screenshot of `df -h` output: /dev/sda mounted on /, 315G size, 298G used, 1.1G available, 100% use)

As you can see, it shows 315GB in size, of which 298GB is in use. So why does it show 100% used? The only reason I have the 1.1GB free that you can see is due to removing more files and a reboot, even though I got rid of 15+GB of files before :/

I've tried quite a few things such as `lsof +L1`:

    COMMAND    PID      USER   FD   TYPE DEVICE SIZE/OFF NLINK  NODE NAME
    php-fpm7.  726      root    3u   REG    8,0        0     0   605 /tmp/.ZendSem.sRUIJj (deleted)
    mysqld     863     mysql    5u   REG    8,0        0     0  2938 /tmp/ibj2MjTy (deleted)
    mysqld     863     mysql    6u   REG    8,0        0     0 10445 /tmp/ibgsRaLu (deleted)
    mysqld     863     mysql    7u   REG    8,0        0     0 76744 /tmp/ibx2g3Cq (deleted)
    mysqld     863     mysql    8u   REG    8,0        0     0 76750 /tmp/ib7D93oi (deleted)
    mysqld     863     mysql   12u   REG    8,0        0     0 77541 /tmp/ibSr0xre (deleted)
    dovecot   1278      root  139u   REG   0,23        0     0  2021 /run/dovecot/login-master-notify6ae65d15ebbecfbf (deleted)
    dovecot   1278      root  172u   REG   0,23        0     0  2022 /run/dovecot/login-master-notify4b18cb63ddb75aab (deleted)
    dovecot   1278      root  177u   REG   0,23        0     0  2023 /run/dovecot/login-master-notify05ff81e3cea47ffa (deleted)
    cron      2239      root    5u   REG    8,0        0     0  1697 /tmp/#1697 (deleted)
    cron      2240      root    5u   REG    8,0        0     0 77563 /tmp/#77563 (deleted)
    sh        2243      root   10u   REG    8,0        0     0  1697 /tmp/#1697 (deleted)
    sh        2243      root   11u   REG    8,0        0     0  1697 /tmp/#1697 (deleted)
    sh        2244      root   10u   REG    8,0        0     0 77563 /tmp/#77563 (deleted)
    sh        2244      root   11u   REG    8,0        0     0 77563 /tmp/#77563 (deleted)
    imap-logi 2512  dovenull    4u   REG   0,23        0     0  2023 /run/dovecot/login-master-notify05ff81e3cea47ffa (deleted)
    imap-logi 3873  dovenull    4u   REG   0,23        0     0  2023 /run/dovecot/login-master-notify05ff81e3cea47ffa (deleted)
    pop3-logi 3915  dovenull    4u   REG   0,23        0     0  2021 /run/dovecot/login-master-notify6ae65d15ebbecfbf (deleted)
    pop3-logi 3917  dovenull    4u   REG   0,23        0     0  2021 /run/dovecot/login-master-notify6ae65d15ebbecfbf (deleted)
    php-fpm7. 4218    fndesk    3u   REG    8,0        0     0   605 /tmp/.ZendSem.sRUIJj (deleted)
    php-fpm7. 4268 executive    3u   REG    8,0        0     0   605 /tmp/.ZendSem.sRUIJj (deleted)

But I can't see anything in there that is holding the space; every deleted file it lists shows a size of 0.
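(For reference, if `lsof +L1` had shown a large deleted-but-still-open file, the space could be reclaimed without restarting the process by truncating the file through /proc. A rough sketch, where `<PID>` and `<FD>` are placeholders taken from the lsof PID and FD columns, with the mode letter dropped from FD:)

    # show only deleted-but-open files that still hold real space (SIZE/OFF is column 7)
    sudo lsof +L1 | awk '$7 > 1048576'
    # truncate one of them to zero bytes without touching the process
    # (<PID> and <FD> are placeholders, e.g. FD "3u" becomes 3)
    sudo sh -c ': > /proc/<PID>/fd/<FD>'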

Michael Hampton:
Restart the programs holding those files open, or reboot the computer.
Andrew Newby:
@MichaelHampton thanks, but I've already tried a full server reboot multiple times :( It just doesn't seem to want to give it up!
Michael Hampton:
You need to delete more files, then.
Does this answer your question? [Disk full, du tells different. How to further investigate?](https://serverfault.com/questions/275206/disk-full-du-tells-different-how-to-further-investigate)
Andrew Newby:
@MichaelHampton I shouldn't need to. The server was running fine and had loads of spare space before it ran out. I uploaded a large file, and then it crashed on me (well, it kept telling me "out of disk space"). But even after deleting that file, the % of free space didn't change. The only other option is for me to update the server to a later version and move all the files over - and I can guarantee that will fix it (but it's days of work, for something that shouldn't even be an issue :( )
Michael Hampton:
Something is filling up your disk. You can continue to investigate it, or don't, that's your choice.
Andrew Newby:
@MichaelHampton I'm trying ;) But it still makes no sense. `/dev/sda 315G 296G 2.9G 100% /` - 315GB - 296GB = 19GB... yet "available" space only shows as 2.9GB... so something is swallowing up that space
Michael Hampton:
You mean the 5% root reservation?
Andrew Newby:
@MichaelHampton hmmm ok, well that makes more sense - 16GB + the 2.9GB. I didn't realise it had a reservation?
Michael Hampton:
Most Unix filesystems have done so since time immemorial, though it has fallen out of favor and more modern filesystems no longer do it.
Andrew Newby:
@MichaelHampton ah ok, maybe that's why I've not noticed it before. Most of the other servers are Ubuntu 20.04, but I've not really had any issues with disk space on those as they have fewer sites
Score:2

Find out what is eating up the disk space, and then find out why, before deleting something.

To show the "Top 10 directories", you could use du -Sh / | sort -rh | head -10.

To show the "Top 10" files", you could use find / -type f -exec du -Sh {} + | sort -rh | head -n 10.

Often you will find huge, unrotated, or fast-filling log files. Depending on your findings, it is sometimes enough to delete some older log files, to configure logrotate, or to adjust the log settings of your services.
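
If it does turn out to be a log that is filling the disk, a minimal sketch of checking and rotating it - the path /etc/logrotate.d/nginx is only an example, substitute the config for whichever service is at fault:

    # list the biggest items under /var/log
    sudo du -sh /var/log/* | sort -rh | head -10
    # dry-run the existing logrotate configuration to see what would be rotated
    sudo logrotate --debug /etc/logrotate.conf
    # force an immediate rotation for one service's config (example path)
    sudo logrotate --force /etc/logrotate.d/nginx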

Regarding your calculation: This does not have to drive you crazy :-)

Often filesystems reserve 5% of their space for use by the root user. You have a 315G disk, so 5% would be ~16G of reserved space. There is a nice article which explains the background: https://blog.tinned-software.net/utility-df-shows-inconsistent-calculation-for-ext-filesystems/
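
If you want to verify that reservation on an ext4 filesystem (or shrink it on a non-root data filesystem), something along these lines should work - /dev/sda is the device from your df output, and the 1% value is only an example:

    # show how many blocks are reserved for root (ext2/3/4 filesystems)
    sudo tune2fs -l /dev/sda | grep -i 'reserved block count'
    # optionally lower the reservation, e.g. to 1%
    # (generally not recommended for the root filesystem itself)
    sudo tune2fs -m 1 /dev/sda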

Andrew Newby:
Thanks for that. Actually, it looks like the bulk of it is a selection of MySQL tables (6+GB some of them, but they do have millions of rows). I'll see if I can find a way to optimize any of them, as that'd be a quick win
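(For reference, a quick way to see which tables are the biggest - a sketch that assumes you can query information_schema:)

    # list the ten largest tables by data + index size, in MB
    mysql -e "SELECT table_schema, table_name,
                     ROUND((data_length + index_length) / 1024 / 1024) AS size_mb
              FROM information_schema.tables
              ORDER BY (data_length + index_length) DESC
              LIMIT 10;"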
Andrew Newby:
I also found where else the space was going. We use InnoDB MySQL tables, and where we had done large "deletes", the IBD files had not gone down in size. Apparently this is normal behaviour for InnoDB. The way around it is to copy the table and then rename it. This took a 16GB file down to just over 10GB :)
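(The copy-and-rename rebuild can also be done in a single statement - a sketch, where mydb.mytable is a placeholder, and the space is only returned to the OS if innodb_file_per_table is enabled:)

    # OPTIMIZE TABLE on InnoDB is implemented as a rebuild (copy + rename),
    # which releases the space left behind by large DELETEs
    mysql -e "OPTIMIZE TABLE mydb.mytable;"
    # equivalent explicit rebuild
    mysql -e "ALTER TABLE mydb.mytable ENGINE=InnoDB;"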