Score:2

Can one VG (Volume Group) contain different types of PVs (Physical Volumes)?


The background: our database data directory is on an LV in a VG. We want to use LVM's snapshot feature to back up the database data, but this VG has no free PEs left. We also cannot shrink the LV (we use XFS, which cannot be shrunk). The VG has two PVs, each of which is a RAID 10 array of 8 hard drives. We have two extra spare hard drives (for hot failover) of the same size as the drives that make up the RAID 10 arrays.

I am considering taking one of the spare drives, creating a PV on it, and adding it to the VG hosting the database data directory. Would this work? Can a VG contain different types of PVs, e.g. two RAID 10 PVs and one non-RAID hard disk?
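
Concretely, what I have in mind is something like the following (the device and VG names are placeholders, not our real ones):

# pvcreate /dev/sdX
# vgextend data_vg /dev/sdX
# vgs -o +vg_free_count data_vg

The last command would just confirm that the VG has free PEs again afterwards.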

Nikita Kipriyanov:
There is one restriction that wasn't mentioned by anyone yet: you can't mix devices with different reported *physical sector sizes* in a single VG. For instance, if you format an NVMe SSD with 4K sectors, you can't put it into the same VG as most other "legacy" storage that reports 512-byte sectors.
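You can check what each device actually reports before trying, e.g.:

# lsblk -o NAME,PHY-SEC,LOG-SEC
# blockdev --getpbsz /dev/sdX

(PHY-SEC and LOG-SEC are the physical and logical sector sizes; /dev/sdX is just a placeholder.)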
Ke Lu:
@NikitaKipriyanov I did hear something like this somewhere, thanks for mentioning it. All our hard disks are the same type, so I think it's not a problem here.
Nikita Kipriyanov:
I mean, I had two *exactly identical* SSDs, the same model and the same firmware version, but configured differently, and it was impossible to add them to an MD RAID1 or an LVM VG together. One was formatted with 4K sectors, the other with 512-byte sectors. So "of the same type" means nothing; you need to make sure they have a compatible configuration, which is what I wanted to convey. I don't know if this could happen with HDDs.
Ke Lu:
@NikitaKipriyanov Interesting. I thought the sector you mean is the physical sector, which is typically 512 bytes (some modern disks have a native 4K sector size). I never heard that this can be configured on an HDD. Maybe it's true for SSDs, since there is no native sector concept there.
Nikita Kipriyanov:
I meant what I wrote: the physical sector size as reported by the device. While this probably can't be configured on an HDD, HDDs can report different sizes, and that may prevent tying them together into a single VG. With SSDs there *is* a "native sector" that's more natural for them, the erase block; however, you rarely know what the erase block size of a given SSD is.
Score:4

Any device node that is a block device can be made a PV and added as a member of a VG. For example, in one of my computers I have a hard disk and a solid-state disk as members of a single VG, working together as a cache ( https://man7.org/linux/man-pages/man7/lvmcache.7.html ).
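
For reference, setting up such a cache boils down to roughly the following sketch (the VG, LV, and device names are placeholders, and the exact lvconvert form depends on your LVM version; see the man page above):

# vgextend data_vg /dev/fast_ssd
# lvcreate -n cache_lv -L 100G data_vg /dev/fast_ssd
# lvconvert --type cache --cachevol cache_lv data_vg/slow_lv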

When creating the snapshot LV, you may want to use the RAID 10 PVs to host the live database data and the spare disk to store the copy-on-write (COW) snapshot. I suggest this because, in the event of a failure of the spare disk, the in-progress backup would be lost rather than the live database. The exact PV hosting an LV can be specified on the command line that creates the snapshot, as exemplified below.

# lvcreate -L 10G -n backup_snap_lv -s data_vg/data_lv /dev/sdb
# lvdisplay --maps data_vg/backup_snap_lv
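
Since the question mentions XFS: once the snapshot exists, a typical flow is to mount it read-only with the nouuid option (the snapshot carries the same filesystem UUID as the mounted origin), run the backup from that mount point, then unmount and remove the snapshot. The mount point below is just an example:

# mount -o ro,nouuid /dev/data_vg/backup_snap_lv /mnt/db_backup
# umount /mnt/db_backup
# lvremove data_vg/backup_snap_lv
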
Score:2

LVM (mostly) does not care about the specifics of the block devices that a VG’s PVs are on. The only major exception is that it may try to pass discard operations down to underlying block devices if they advertise that they support such operations.
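
One related knob, if you want LVM itself to issue discards to the PVs when LVs are removed or reduced, is the issue_discards setting in lvm.conf (it defaults to off):

# /etc/lvm/lvm.conf (excerpt)
devices {
    issue_discards = 1
}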

For example, on my home-server system I have four 4TB SATA hard drives and two 1TB NVMe SSDs all set up as PVs for the sole VG on the system, which I’ve done because it makes it easier to migrate data when I need to replace a disk (it turns into a simple pvmove command instead of needing to do something like merge VGs).
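
For reference, that kind of migration looks roughly like this (the VG and device names are placeholders):

# vgextend my_vg /dev/new_disk
# pvmove /dev/old_disk
# vgreduce my_vg /dev/old_disk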

However, there's a big caveat to this. Because LVM does not care about the underlying block devices, it's not smart enough to 'intelligently' place new LVs (or in some cases extend existing ones) unless all you care about is minimizing free space fragmentation. As such, you should always explicitly specify which PVs to use when creating new LVs in such a setup so that you get exactly the behavior you expect.
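
For example (the VG, LV, and device names are placeholders), listing PVs at the end of the command pins the new LV, or the extension, to those PVs:

# lvcreate -L 50G -n fast_lv my_vg /dev/nvme0n1 /dev/nvme1n1
# lvextend -L +10G my_vg/fast_lv /dev/nvme0n1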


As an aside, assuming those spare drives are true hot spares (that is, the system will automatically use them as replacements in the event of a drive failure in the rest of the array), I would not recommend repurposing one of them unless you desperately need more space and truly don’t care about the possibility of a double drive failure (which is actually relatively likely with that many drives in each array). If you can free up space some other way, that should probably be your first approach to getting more space to work with.

Score:2

There should be no problem, because this is the whole idea of LVM. At the bottom level you have physical storage devices. PVs are created on top of them; when you add PVs to a VG, the VG does not care what is underneath, and when you create an LV or add space to one, the LV does not care where the VG gets that space from.

Of course, there are special cases: when you create a mirror (two copies of an LV), you can make the copies reside on different PVs (and in the strictest mode you will also need a third PV).
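
As a rough sketch (the names are placeholders), a two-way mirror with each copy pinned to its own PV would look like:

# lvcreate --type raid1 -m 1 -L 20G -n mirrored_lv my_vg /dev/sdb /dev/sdc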
