Hi, I had (and still have) some very strange problems with an Ubuntu 18 LTS server and LVM.
First, only 4 GB seemed to have been allocated when the host was created during setup, which was detected much later, after about a year of use, when the root drive filled up.
The host was provisioned with 200 GB of space in VMware, but the root filesystem used only 4 GB.
It looked something like this (sadly I no longer have the real numbers for this first part; this is more or less how I remember it):
root@somehostname:~# df -h --total
Filesystem Size Used Avail Use% Mounted on
udev 1,9G 0 1,9G 0% /dev
tmpfs 395M 6,1M 389M 2% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 3,9G 3,9G 0 100% /
tmpfs 2,0G 0 2,0G 0% /dev/shm
tmpfs 5,0M 0 5,0M 0% /run/lock
tmpfs 2,0G 0 2,0G 0% /sys/fs/cgroup
/dev/sda2 976M 224M 685M 25% /boot
tmpfs 395M 0 395M 0% /run/user/1000
/dev/loop2 99M 99M 0 100% /snap/core/11081
/dev/loop1 100M 100M 0 100% /snap/core/11167
I "fixed" that using the following commands, but only gained 46 GB of space, even though lvresize reported that the size "changed from 52,75 GiB (13504 extents) to <199,00 GiB (50943 extents)":
(At this point I started copying every change I made, so these are real numbers and results.)
root@somehostname:/var/log# lvresize -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
/etc/lvm/archive/.lvm_trac_2626_513583731: write error failed: No space left on device
root@somehostname:/var/log# resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
resize2fs 1.44.1 (24-Mar-2018)
Filesystem at /dev/mapper/ubuntu--vg-ubuntu--lv is mounted on /; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 7
The filesystem on /dev/mapper/ubuntu--vg-ubuntu--lv is now 13828096 (4k) blocks long.
root@somehostname:/var/log# lvresize -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
Size of logical volume ubuntu-vg/ubuntu-lv changed from 52,75 GiB (13504 extents) to <199,00 GiB (50943 extents).
Logical volume ubuntu-vg/ubuntu-lv successfully resized.
root@somehostname:/var/log# df -h --total
Filesystem Size Used Avail Use% Mounted on
udev 1,9G 0 1,9G 0% /dev
tmpfs 395M 6,1M 389M 2% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 52G 3,9G 46G 8% /
tmpfs 2,0G 0 2,0G 0% /dev/shm
tmpfs 5,0M 0 5,0M 0% /run/lock
tmpfs 2,0G 0 2,0G 0% /sys/fs/cgroup
/dev/loop0 98M 98M 0 100% /snap/core/9993
/dev/sda2 976M 224M 685M 25% /boot
tmpfs 395M 0 395M 0% /run/user/1000
/dev/loop2 99M 99M 0 100% /snap/core/11081
total 60G 4,4G 53G 8% -
root@somehostname:/var/log# lvresize -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
New size (50943 extents) matches existing size (50943 extents).
root@somehostname:/var/log# df -h --total
Filesystem Size Used Avail Use% Mounted on
udev 1,9G 0 1,9G 0% /dev
tmpfs 395M 6,1M 389M 2% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 52G 3,9G 46G 8% /
tmpfs 2,0G 0 2,0G 0% /dev/shm
tmpfs 5,0M 0 5,0M 0% /run/lock
tmpfs 2,0G 0 2,0G 0% /sys/fs/cgroup
/dev/loop0 98M 98M 0 100% /snap/core/9993
/dev/sda2 976M 224M 685M 25% /boot
tmpfs 395M 0 395M 0% /run/user/1000
/dev/loop2 99M 99M 0 100% /snap/core/11081
total 60G 4,4G 53G 8% -
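For reference, my understanding from the lvresize(8) and resize2fs(8) man pages is that growing the logical volume and growing the ext4 filesystem are two separate steps, and that resize2fs only grows the filesystem up to whatever the LV size is at the moment it runs. A minimal sketch of the order I believe is correct (same device path as above; I have not re-run this yet, so treat it as an assumption):

```shell
# Step 1: grow the logical volume to consume all free extents in the VG
lvresize -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv

# Step 2: grow the ext4 filesystem to fill the (now larger) LV;
# resize2fs with no explicit size argument expands to the current device size
resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv

# Alternatively, -r/--resizefs is supposed to do both steps in one call
lvresize -r -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
```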
lvscan and pvscan both report ~200 GB:
lvscan
ACTIVE '/dev/ubuntu-vg/ubuntu-lv' [<199,00 GiB] inherit
pvscan
PV /dev/sda3 VG ubuntu-vg lvm2 [<199,00 GiB / 0 free]
Total: 1 [<199,00 GiB] / in use: 1 [<199,00 GiB] / in no VG: 0 [0 ]
vgscan reports only one VG:
vgscan
Reading volume groups from cache.
Found volume group "ubuntu-vg" using metadata type lvm2
Does anyone have any idea what is going on here? Why isn't the filesystem using all the space of the LV? What am I missing? And why was only 4 GB in use to begin with, when the LV had 52,75 GiB, and why does the filesystem only use that amount of space now, when it actually should use the full 199 GiB?