We had a weird issue yesterday on our Ubuntu 18.04 server. One of the team tried to add 30G of space to this LV:
Disk /dev/mapper/rootvg-pgexecstsqllv: 32 GiB, 34359738368 bytes, 67108864 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
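The resize itself was done with the usual lvextend/resize2fs sequence. I wasn't at the keyboard, so the commands below are my reconstruction of roughly what was run, not a copy-paste of the actual session:

lvextend -L +30G /dev/rootvg/pgexecstsqllv
resize2fs /dev/mapper/rootvg-pgexecstsqllv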
The system is supposed to have a single volume group, rootvg, but after the lvextend and resize2fs were run, pvscan, vgscan, and lvscan stopped reporting anything at all. Here's an example with vgscan in verbose mode; reading the cache just reports "No volume groups found."
vgscan -vv
devices/global_filter not found in config: defaulting to global_filter = [ "a|.*/|" ]
Setting global/locking_type to 1
Setting global/use_lvmetad to 1
global/lvmetad_update_wait_time not found in config: defaulting to 10
Setting response to OK
Setting protocol to lvmetad
Setting version to 1
Setting global/use_lvmpolld to 1
Setting devices/sysfs_scan to 1
Setting devices/multipath_component_detection to 1
Setting devices/md_component_detection to 1
Setting devices/fw_raid_component_detection to 0
Setting devices/ignore_suspended_devices to 0
Setting devices/ignore_lvm_mirrors to 1
devices/filter not found in config: defaulting to filter = [ "a|.*/|" ]
Setting devices/cache_dir to /run/lvm
Setting devices/cache_file_prefix to
devices/cache not found in config: defaulting to /run/lvm/.cache
Setting devices/write_cache_state to 1
Setting global/use_lvmetad to 1
Setting activation/activation_mode to degraded
metadata/record_lvs_history not found in config: defaulting to 0
Setting activation/monitoring to 1
Setting global/locking_type to 1
Setting global/wait_for_locks to 1
File-based locking selected.
Setting global/prioritise_write_locks to 1
Setting global/locking_dir to /run/lock/lvm
Setting global/use_lvmlockd to 0
Locking /run/lock/lvm/P_global WB
Wiping cache of LVM-capable devices
Wiping internal VG cache
Setting response to OK
Setting token to filter:3239235440
Setting daemon_pid to 665
Setting response to OK
Setting global_disable to 0
Reading volume groups from cache.
Setting response to OK
Setting response to OK
Setting response to OK
No volume groups found.
Unlocking /run/lock/lvm/P_global
Setting global/notify_dbus to 1
The LVM commands report that they can no longer see any PVs, VGs, or LVs, even though the system is still up and running and the device files appear to be intact:
ls -ld /dev/rootvg /dev/rootvg/* /dev/mapper/root* /dev/dm*
brw-rw---- 1 root disk 253, 0 Jul 13 22:07 /dev/dm-0
brw-rw---- 1 root disk 253, 1 Jul 13 22:07 /dev/dm-1
brw-rw---- 1 root disk 253, 10 Jul 13 22:07 /dev/dm-10
brw-rw---- 1 root disk 253, 11 Jul 13 22:07 /dev/dm-11
brw-rw---- 1 root disk 253, 2 Jul 13 22:07 /dev/dm-2
brw-rw---- 1 root disk 253, 3 Jul 13 22:07 /dev/dm-3
brw-rw---- 1 root disk 253, 4 Jul 13 22:07 /dev/dm-4
brw-rw---- 1 root disk 253, 5 Jul 13 22:07 /dev/dm-5
brw-rw---- 1 root disk 253, 6 Jul 13 22:07 /dev/dm-6
brw-rw---- 1 root disk 253, 7 Jul 13 22:07 /dev/dm-7
brw-rw---- 1 root disk 253, 8 Jul 13 22:07 /dev/dm-8
brw-rw---- 1 root disk 253, 9 Jul 13 22:07 /dev/dm-9
lrwxrwxrwx 1 root root 7 Jul 13 22:07 /dev/mapper/rootvg-appslv -> ../dm-2
lrwxrwxrwx 1 root root 7 Jul 13 22:07 /dev/mapper/rootvg-execdirstsqllv -> ../dm-6
lrwxrwxrwx 1 root root 7 Jul 13 22:07 /dev/mapper/rootvg-execstsqlddlv -> ../dm-8
lrwxrwxrwx 1 root root 8 Jul 13 22:07 /dev/mapper/rootvg-inbacklv -> ../dm-10
lrwxrwxrwx 1 root root 7 Jul 13 22:07 /dev/mapper/rootvg-nodeapplv -> ../dm-9
lrwxrwxrwx 1 root root 7 Jul 13 22:07 /dev/mapper/rootvg-pgarcloglv -> ../dm-4
lrwxrwxrwx 1 root root 7 Jul 13 22:07 /dev/mapper/rootvg-pgbackuplv -> ../dm-3
lrwxrwxrwx 1 root root 7 Jul 13 22:07 /dev/mapper/rootvg-pgexecstsqllv -> ../dm-5
lrwxrwxrwx 1 root root 7 Jul 13 22:07 /dev/mapper/rootvg-rootlv -> ../dm-0
lrwxrwxrwx 1 root root 8 Jul 13 22:07 /dev/mapper/rootvg-tempfslv -> ../dm-11
lrwxrwxrwx 1 root root 7 Jul 13 22:07 /dev/mapper/rootvg-tmplv -> ../dm-1
lrwxrwxrwx 1 root root 7 Jul 13 22:07 /dev/mapper/rootvg-workdirlv -> ../dm-7
drwxr-xr-x 2 root root 280 Jul 13 20:32 /dev/rootvg
lrwxrwxrwx 1 root root 7 Jul 13 22:07 /dev/rootvg/appslv -> ../dm-2
lrwxrwxrwx 1 root root 7 Jul 13 22:07 /dev/rootvg/execdirstsqllv -> ../dm-6
lrwxrwxrwx 1 root root 7 Jul 13 22:07 /dev/rootvg/execstsqlddlv -> ../dm-8
lrwxrwxrwx 1 root root 8 Jul 13 22:07 /dev/rootvg/inbacklv -> ../dm-10
lrwxrwxrwx 1 root root 7 Jul 13 22:07 /dev/rootvg/nodeapplv -> ../dm-9
lrwxrwxrwx 1 root root 7 Jul 13 22:07 /dev/rootvg/pgarcloglv -> ../dm-4
lrwxrwxrwx 1 root root 7 Jul 13 22:07 /dev/rootvg/pgbackuplv -> ../dm-3
lrwxrwxrwx 1 root root 7 Jul 13 22:07 /dev/rootvg/pgexecstsqllv -> ../dm-5
lrwxrwxrwx 1 root root 7 Jul 13 22:07 /dev/rootvg/rootlv -> ../dm-0
lrwxrwxrwx 1 root root 8 Jul 13 22:07 /dev/rootvg/tempfslv -> ../dm-11
lrwxrwxrwx 1 root root 7 Jul 13 22:07 /dev/rootvg/tmplv -> ../dm-1
lrwxrwxrwx 1 root root 7 Jul 13 22:07 /dev/rootvg/workdirlv -> ../dm-7
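Since the box is still running happily off these LVs, I assume the device-mapper tables themselves are intact. If output from any of the following would help with diagnosis, I can post it (commands only here, to keep this from getting even longer):

dmsetup ls                                # confirm the dm mappings are still loaded
lsblk -f                                  # block device / filesystem layout
ls -l /etc/lvm/backup /etc/lvm/archive    # check the automatic metadata backups are present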
It appears as if the LVM cache, or something similar, has been corrupted or gone missing. This seems to have happened to others before; I found a number of similar questions, and this one looked promising:
https://superuser.com/questions/421896/vgdisplay-and-lvdisplay-no-volume-groups-found
Unfortunately it did not resolve the issue. Does anyone have an idea how to rebuild the LVM cache? Is there anything other than a re-install that can fix this?
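For reference, this is what I'm considering trying next. It's pieced together from the pvscan/vgscan/vgcfgrestore man pages rather than anything I've confirmed will fix this particular state, so please shout if any of it is a bad idea:

# ask lvmetad to rescan all devices and rebuild its view
pvscan --cache -vv
vgscan --cache -vv

# failing that, restart the metadata daemon and rescan again
systemctl restart lvm2-lvmetad.service
pvscan --cache

# last resort, only if the on-disk metadata really is damaged:
# restore the VG metadata from the automatic backup
vgcfgrestore -f /etc/lvm/backup/rootvg rootvg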
Thx,
Steve