LVM (mostly) does not care about the specifics of the block devices that a VG’s PVs are on. The only major exception is that it may try to pass discard (TRIM) operations down to the underlying block devices if they advertise support for them.
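As a quick sketch of what that looks like in practice (the device paths here are just examples), you can check whether a device advertises discard support, and optionally tell LVM to issue discards itself when LV space is released:

```
# Non-zero DISC-GRAN/DISC-MAX values mean the device advertises
# discard support:
lsblk --discard /dev/sda /dev/nvme0n1

# To have LVM itself issue discards when LV space is freed
# (e.g. on lvremove/lvreduce), set this in /etc/lvm/lvm.conf;
# it's typically disabled by default:
#   devices {
#       issue_discards = 1
#   }
```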
For example, on my home server I have four 4TB SATA hard drives and two 1TB NVMe SSDs all set up as PVs for the sole VG on the system. I’ve done this because it makes it easier to migrate data when I need to replace a disk (it turns into a simple pvmove command instead of needing to do something like merging VGs).
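As a sketch of what that replacement workflow looks like (the VG name and device paths here are hypothetical):

```
# Prepare the replacement disk and add it to the VG:
pvcreate /dev/sdd
vgextend homevg /dev/sdd

# Move all allocated extents off the old disk onto the new one
# (if you omit the destination, pvmove picks target PVs itself):
pvmove /dev/sdc /dev/sdd

# Once the old PV is empty, drop it from the VG and wipe its label:
vgreduce homevg /dev/sdc
pvremove /dev/sdc
```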
However, there’s a big caveat to this. Because LVM does not care about the underlying block devices, it’s not smart enough to place new LVs ‘intelligently’ (or, in some cases, extend existing ones); its allocator only cares about things like minimizing free-space fragmentation. As such, in a setup like this you should always explicitly specify which PVs to use when creating new LVs, so that you get exactly the behavior you expect.
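For instance, to make sure a new LV lands on the SSDs rather than wherever the allocator happens to prefer, list the allowed PVs at the end of the command (the VG, LV, and device names below are placeholders):

```
# Create a 100G LV that may only allocate extents from the NVMe PVs:
lvcreate -L 100G -n fastdata homevg /dev/nvme0n1 /dev/nvme1n1

# The same applies when extending an LV; restrict the new extents
# to the SATA PVs if that's where the data belongs:
lvextend -L +50G homevg/bulkdata /dev/sda /dev/sdb
```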
As an aside, assuming those spare drives are true hot spares (that is, the system will automatically use them as replacements if a drive in the rest of the array fails), I would not recommend repurposing one of them unless you desperately need the space and truly don’t care about the possibility of a double drive failure (which is actually relatively likely with that many drives in each array). If you can free up space some other way, that should probably be your first approach.