Score:0

Disk size does not match at all (Ubuntu/VMware)

I set up an Ubuntu server on VMware with a 50 GB disk.

Now Ubuntu says I don't have enough space, and `df -h` indeed tells me my root filesystem only has 24G...

buzz@plex-vm:~$ df -h
Filesystem                         Size  Used Avail Use% Mounted on
udev                               1.9G     0  1.9G   0% /dev
tmpfs                              391M   17M  375M   5% /run
/dev/mapper/ubuntu--vg-ubuntu--lv   24G   24G     0 100% /
tmpfs                              2.0G  4.0K  2.0G   1% /dev/shm
tmpfs                              5.0M     0  5.0M   0% /run/lock
tmpfs                              2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/loop0                          56M   56M     0 100% /snap/core18/2128
/dev/loop1                         183M  183M     0 100% /snap/microk8s/2407
/dev/loop2                          71M   71M     0 100% /snap/lxd/21029
/dev/loop3                          33M   33M     0 100% /snap/snapd/12704
/dev/sda2                          976M  107M  803M  12% /boot
tmpfs                              391M     0  391M   0% /run/user/1000
shm                                 64M     0   64M   0% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes/51409ef32a730a343a26395556584edd5bd17873c10109d3a6fa1a4d3c911141/shm
overlay                             24G   24G     0 100% /var/snap/microk8s/common/run/containerd/io.containerd.runtime.v2.task/k8s.io/51409ef32a730a343a26395556584edd5bd17873c10109d3a6fa1a4d3c911141/rootfs
shm                                 64M     0   64M   0% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes/b61c7a372768621c65b2df55294be4349f2390ec6638189a5fcc95e2c859427e/shm
overlay                             24G   24G     0 100% /var/snap/microk8s/common/run/containerd/io.containerd.runtime.v2.task/k8s.io/b61c7a372768621c65b2df55294be4349f2390ec6638189a5fcc95e2c859427e/rootfs
overlay                             24G   24G     0 100% /var/snap/microk8s/common/run/containerd/io.containerd.runtime.v2.task/k8s.io/07426bdff7fccf7e772ca7f89044ab2802f6fe223feb23756118c30969610d0b/rootfs
overlay                             24G   24G     0 100% /var/snap/microk8s/common/run/containerd/io.containerd.runtime.v2.task/k8s.io/823bca245e00413131fb57eb002442e288cde003bd6556dd50a306229cfa74e1/rootfs  

buzz@plex-vm:~$ sudo parted
GNU Parted 3.3
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: VMware, VMware Virtual S (scsi)
Disk /dev/sda: 53.7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  2097kB  1049kB                     bios_grub
 2      2097kB  1076MB  1074MB  ext4
 3      1076MB  53.7GB  52.6GB
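
Partition 3 is presumably the LVM physical volume backing ubuntu-vg, so the full ~49G does reach LVM; for what it's worth, this should confirm how much of it is allocated:

sudo pvs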

`vgdisplay` gives me the following:

buzz@plex-vm:~$ sudo vgdisplay
--- Volume group ---
VG Name               ubuntu-vg
System ID
Format                lvm2
...
VG Access             read/write
VG Status             resizable
...
VG Size               <49.00 GiB
PE Size               4.00 MiB
Total PE              12543
Alloc PE / Size       6272 / 24.50 GiB
Free  PE / Size       6271 / <24.50 GiB
...
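
So the VG itself reports roughly 24.5 GiB of unallocated space. For a more compact view of the same numbers:

sudo vgs ubuntu-vg
sudo lvs ubuntu-vg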

But when I try to extend it, I'm told there's no space left on the device, no matter how much I try to extend it by...

buzz@plex-vm:~$ sudo lvextend -L +2G /dev/mapper/ubuntu--vg-ubuntu--lv
/etc/lvm/archive: mkdir failed: No space left on device

buzz@plex-vm:~$ sudo lvextend -L +24G /dev/mapper/ubuntu--vg-ubuntu--lv
/etc/lvm/archive: mkdir failed: No space left on device

buzz@plex-vm:~$ sudo lvextend -l+100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
/etc/lvm/archive: mkdir failed: No space left on device

And `cfdisk` gives me the following:

                                                      Disk: /dev/sda
                                Size: 50 GiB, 53687091200 bytes, 104857600 sectors
                           Label: gpt, identifier: 4852B6EA-3BC7-4801-9D11-CED0A0B14B73

Device                           Start                 End             Sectors          Size Type
/dev/sda1                         2048                4095                2048            1M BIOS boot
/dev/sda2                         4096             2101247             2097152            1G Linux filesystem
/dev/sda3                      2101248           104855551           102754304           49G Linux filesystem

The SSD the VM lives on has more than enough free space, and in Windows the whole SSD is a single partition.

Thanks for the help.

Charles Green
It looks as though, when you created your logical volume group, you didn't allocate the full disk space available. This was answered in [this answer], which is unfortunately just a link to [https://www.linuxtechi.com/extend-lvm-partitions/](https://www.linuxtechi.com/extend-lvm-partitions/)
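In short, what that link describes boils down to something like this (assuming an ext4 root, the Ubuntu server default):

sudo lvextend -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv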
Capt. Mustard
Thanks for the reply. I updated my post to show what happens when I try what is described in the link.
Charles Green
I'm not really conversant with LVs, as I don't actually have a reason to use them on my laptop. I would check the output of `lvs -a` to see whether this has perhaps been set up as two logical volumes in a mirror arrangement.
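Something like `sudo lvs -a -o +lv_layout,devices` should show each LV's layout and the devices it sits on (`lv_layout` and `devices` are standard `lvs` report columns).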
Score:2

Well, I managed to extend the LV with:

sudo lvextend -l+100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv -A n

It was the autobackup that blocked the LV from being extended, because the root filesystem was completely full: LVM couldn't write its metadata archive under /etc/lvm/archive. `-A n` sets autobackup to no, in case somebody encounters the same problem.
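
Note that `lvextend` on its own only grows the logical volume; unless `-r`/`--resizefs` is passed, the filesystem still has to be grown to match. Assuming the default ext4 root, that can be done online with:

sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv

Once the root filesystem has breathing room again, the metadata backup that was skipped can presumably be taken manually with `sudo vgcfgbackup ubuntu-vg`.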
