
LVM not showing LV although it's in use


I have a machine with a RAID-1 array (sda) hosting Debian 10 and a RAID-5 array used for storage (sdb), each carrying its own independent volume group. Recently the RAID-5 array was corrupted, so I recreated it and set up LVM again:

pvcreate /dev/sdb1
vgcreate "server-h01-space" /dev/sdb1
lvcreate -n "storage" -L 20.5T server-h01-space
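As a sanity check on the requested size: at the VG's 4 MiB extent size (shown in the pvdisplay output below), 20.5 TiB corresponds to 5,373,952 physical extents. The arithmetic, scaled by 10 to stay in shell integer math:

```shell
# 20.5 TiB expressed in MiB: 20.5 * 1024 * 1024 (scaled by 10 for integer math)
size_mib=$(( 205 * 1024 * 1024 / 10 ))
# Number of 4 MiB extents needed for the LV
extents=$(( size_mib / 4 ))
echo "$extents"   # 5373952
```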

During setup, pvcreate and vgcreate reported an existing XFS signature and offered to wipe it. I declined, hoping the filesystem could be recovered. However, lvcreate then failed because it could not find the volume group. I rebooted the machine and ran the commands again, this time wiping the XFS signature. After that I created a filesystem on /dev/mapper/server--h01--space-storage and mounted it.
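For reference, stale signatures like that XFS one can be inspected before deciding whether to wipe; wipefs in no-act mode only lists them without touching the disk (device name taken from the question):

    # List signatures on the partition without erasing anything (-n = no-act)
    wipefs -n /dev/sdb1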

However, while everything seems to have worked fine, all LVM commands show only what was created on sda; i.e., the *display commands report:

# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               server-h01
  PV Size               134.75 GiB / not usable 0
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              34497
  Free PE               7873
  Allocated PE          26624
  PV UUID               aaaaa-bbbb-cccc-dddd-eeee-ffff-ggggg

# vgdisplay
  --- Volume group ---
  VG Name               server-h01
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  23
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                4
  Open LV               4
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               134.75 GiB
  PE Size               4.00 MiB
  Total PE              34497
  Alloc PE / Size       26624 / 104.00 GiB
  Free  PE / Size       7873 / 30.75 GiB
  VG UUID               11111-2222-3333-4444-5555-6666-7777-88888

# lvdisplay
  --- Logical volume ---
  LV Path                /dev/server-h01/root
  LV Name                root
  VG Name                server-h01
  LV UUID                ***
  LV Write Access        read/write
  LV Creation host, time Microknoppix, 2019-10-09 10:48:26 +0200
  LV Status              available
  # open                 1
  LV Size                25.00 GiB
  Current LE             6400
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:0

  --- Logical volume ---
  LV Path                /dev/server-h01/tmp
  LV Name                tmp
  VG Name                server-h01
  LV UUID                ***
  LV Write Access        read/write
  LV Creation host, time Microknoppix, 2019-10-09 10:48:37 +0200
  LV Status              available
  # open                 1
  LV Size                7.00 GiB
  Current LE             1792
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:2

  --- Logical volume ---
  LV Path                /dev/server-h01/var
  LV Name                var
  VG Name                server-h01
  LV UUID                ***
  LV Write Access        read/write
  LV Creation host, time Microknoppix, 2019-10-09 10:48:53 +0200
  LV Status              available
  # open                 1
  LV Size                63.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:3

  --- Logical volume ---
  LV Path                /dev/server-h01/log
  LV Name                log
  VG Name                server-h01
  LV UUID                ***
  LV Write Access        read/write
  LV Creation host, time server-h01, 2019-10-13 21:40:41 +0200
  LV Status              available
  # open                 1
  LV Size                9.00 GiB
  Current LE             2304
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:6

This is contradicted by the output of mount

# mount|grep -e "^\/dev"
/dev/mapper/server--h01-root on / type xfs (rw,relatime,attr2,inode64,noquota)
/dev/sda1 on /boot type xfs (rw,relatime,attr2,inode64,noquota)
/dev/mapper/server--h01-tmp on /tmp type xfs (rw,relatime,attr2,inode64,usrquota,grpquota)
/dev/mapper/server--h01-var on /var type xfs (rw,relatime,attr2,inode64,usrquota,grpquota)
/dev/mapper/server--h01-log on /var/log type xfs (rw,noexec,relatime,attr2,inode64,noquota)
/dev/mapper/server--h01--space-storage on /storage type xfs (rw,noexec,relatime,attr2,inode64,usrquota,grpquota)

I am afraid the LVM setup on sdb is not persistent and will be lost on the next reboot. Unfortunately, I discovered the issue only after /storage had been populated with several TB of data.

Any ideas why mount does not agree with pvdisplay?
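A few read-only checks may help narrow down what LVM and the kernel each believe about sdb1 (commands use the device names from the question; `pvs -a` and `dmsetup table` are standard LVM2/device-mapper tools):

    # List all block devices LVM can see, including ones it rejects
    pvs -a

    # Show the device-mapper table backing the mounted volume;
    # its "linear" target should point at sdb1's major:minor numbers
    dmsetup table server--h01--space-storage

    # Low-level probe of on-disk signatures on sdb1, without modifying anything
    blkid -p /dev/sdb1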

Edit: Output of the *scan commands, as requested:

# pvscan
  PV /dev/sda2   VG server-h01      lvm2 [134.75 GiB / 30.75 GiB free]
  Total: 1 [134.75 GiB] / in use: 1 [134.75 GiB] / in no VG: 0 [0   ]

# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "server-h01" using metadata type lvm2

# lvscan
  ACTIVE            '/dev/server-h01/root' [25.00 GiB] inherit
  ACTIVE            '/dev/server-h01/tmp' [7.00 GiB] inherit
  ACTIVE            '/dev/server-h01/var' [63.00 GiB] inherit
  ACTIVE            '/dev/server-h01/log' [9.00 GiB] inherit

Edit: These are the relevant entries from /etc/lvm/archive:

# Generated by LVM2 version 2.03.02(2) (2018-12-18): Sun May 14 15:56:19 2023

contents = "Text Format Volume Group"
version = 1

description = "Created *before* executing 'vgcreate server-h01-space /dev/sdb1'"

creation_host = "server-h01"    # Linux server-h01 4.19.0-24-amd64 #1 SMP Debian 4.19.282-1 (2023-04-29) x86_64
creation_time = 1684072579      # Sun May 14 15:56:19 2023

server-h01-space {
        id = "***"
        seqno = 0
        format = "lvm2"                 # informational
        status = ["RESIZEABLE", "READ", "WRITE"]
        flags = []
        extent_size = 8192              # 4 Megabytes
        max_lv = 0
        max_pv = 0
        metadata_copies = 0

        physical_volumes {

                pv0 {
                        id = "***"
                        device = "/dev/sdb1"    # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 58599665664  # 27.2876 Terabytes
                        pe_start = 2048
                        pe_count = 7153279      # 27.2876 Terabytes
                }
        }


}


# Generated by LVM2 version 2.03.02(2) (2018-12-18): Sun May 14 15:56:43 2023

contents = "Text Format Volume Group"
version = 1

description = "Created *before* executing 'lvcreate -n server -L 20.5T server-h01-space'"

creation_host = "server-h01"    # Linux server-h01 4.19.0-24-amd64 #1 SMP Debian 4.19.282-1 (2023-04-29) x86_64
creation_time = 1684072603      # Sun May 14 15:56:43 2023

server-h01-space {
        id = "***"
        seqno = 1
        format = "lvm2"                 # informational
        status = ["RESIZEABLE", "READ", "WRITE"]
        flags = []
        extent_size = 8192              # 4 Megabytes
        max_lv = 0
        max_pv = 0
        metadata_copies = 0

        physical_volumes {

                pv0 {
                        id = "***"
                        device = "/dev/sdb1"    # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 58599665664  # 27.2876 Terabytes
                        pe_start = 2048
                        pe_count = 7153279      # 27.2876 Terabytes
                }
        }


}
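If it turns out the LVM metadata really is missing from disk, the archive above could in principle be used to rewrite it. A rough sketch only, with placeholder filenames (the exact archive file name is not shown above; the newest file under /etc/lvm/backup would also include the LV created afterwards, whereas the seqno 1 archive predates lvcreate). Do not run this against a mounted volume, and take a backup first:

    # Recreate the PV label with its original UUID from the archived metadata
    pvcreate --uuid "<PV-UUID-from-archive>" \
             --restorefile /etc/lvm/archive/<archive-file>.vg /dev/sdb1

    # Restore the VG metadata from the same file
    vgcfgrestore -f /etc/lvm/archive/<archive-file>.vg server-h01-space

    # Activate the restored VG
    vgchange -ay server-h01-space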
Comments:

"Please don't use RAID-5, it's dangerous."

"Please show the output of `pvscan`, `vgscan`, `lvscan`."

"Please run `pvscan --cache /dev/sdb1`"

Marcus (OP): "The output of `pvscan --cache /dev/sdb1` is empty"

"Does `sdb` even show up in `fdisk -l`?"

Marcus (OP): "It does, as does `/dev/mapper/server--h01--space-storage`. FYI, /dev/sdb is GPT and sdb1 was created with `parted`."