BACKGROUND
I currently use a hybrid server-workstation setup for my primary machine. Basically, it's a Fedora base running Linux KVM, on which I virtualize a couple of Linux flavors and a Windows install for use as my workstations, along with the ability to quickly spin up VMs for testing.
The CURRENT Physical Storage layout is as follows:
- Intel 390A Chipset
  - iRST as a "hardware" RAID option (offering RAID-0, 1, 10, 5)
  - RST-controlled single 6-port SATA hub
    - 2X 120GB SATA6 SSDs in RAID-0
    - 2X 500GB SATA6 + 1X 1TB SATA6 HDDs in RAID-5 (strange happenings with this)
    - 1X 2TB SATA6 HDD, non-RAID
- 1TB NVMe PCIe x4 SSD (not RST-controlled, since, for something like 5 years, there's been an unfixed bug involving both the Linux kernel and Anaconda that prevents RST-controlled NVMe/PCIe drives from being enumerated)
- USB 3.0 hub
  - 1X 3TB 5400RPM + 1X 1TB 5400RPM external USB 3.0 HDDs
- USB 3.1 Gen 2 hub
  - 3X 2TB 5400RPM external USB 3.0 HDDs

All USB drives are currently in single-disk configuration.
I have a lot of free storage space, and recently a BIOS update caused the RAID-5 array to "drop" one of the disks (the 1TB one) and show as "degraded". Yet there was no data loss and no need to rebuild the array; it still showed as an "RST RAID-5 Array", while what had been the third disk now appeared as a single disk with a single "RAW" partition. That makes me question exactly how iRST handles a 3-disk RAID-5 array. If it had actually been RAID-5 from the beginning, then (as I understood it) the drive would have needed to be replaced and the array rebuilt before any data could have been accessed. So I suspect it was actually treating it like a RAID-4, with a dedicated parity disk rather than striped parity, or possibly something non-standard, like a RAID-0 across the two 500GB drives mirrored (RAID-1) onto the 1TB drive. That last layout would actually be the most optimal config, since it would allow full use of the space with maximal (at least read) performance while still tolerating a single drive failure like RAID-5, although I don't think iRST is "smart" enough to do such a thing... it remains a mystery. (I think Btrfs, and maybe Linux LVM with ZFS, can do stuff like that, or one could do it with a combo of hardware and software RAID, but again, off the subject.)

The NVMe drive holds all the main OS files, so I decided to copy everything from the internal HDDs to the external HDDs and completely reconfigure the storage. Since iRST is really just a step above software RAID (unless used with those special Intel drives, and even then, I'm not sure), I figured software RAID in Linux was my best option, as it offers a lot more flexibility. I considered Btrfs, and it honestly is probably a better option than LVM, but I have more familiarity with the latter, so anyway...
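For reference, the mdadm + LVM route I'm leaning toward would look roughly like this (device names are placeholders for the internal HDDs, not my actual layout):

```shell
# Placeholder device names -- substitute the actual internal HDDs.
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 \
    /dev/sdb /dev/sdc /dev/sdd

# LVM on top of the md array
sudo pvcreate /dev/md0
sudo vgcreate vg_storage /dev/md0
sudo lvcreate -l 100%FREE -n lv_data vg_storage
sudo mkfs.ext4 /dev/vg_storage/lv_data

# Record the array so it assembles at boot (Fedora keeps this in /etc/mdadm.conf)
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf
sudo dracut -f   # regenerate the initramfs so early boot sees the array
```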
MAIN QUESTION
So, with all that said, I was thinking about possibly trying to incorporate my external USB drives into the RAID setups. The WD My Passport drives are actually pretty good for 5400RPM spinning drives, and the bottleneck on single read or write operations is honestly usually the USB PHY. Combining the external USB drives into RAID arrays by themselves would likely not offer much of a speed improvement, but rather just bottleneck the USB controllers. But that got me thinking... What if I used the external USB drives in conjunction with the internal drives, in a nonstandard configuration, like RAID-3 or RAID-4? Basically, using the internal drives as the data drives, while using the slower external USB drives as the parity drives?
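As far as I can tell, Linux md actually exposes a dedicated-parity level directly (--level=4), so this should be testable without anything exotic. A rough sketch with hypothetical device names (internal data disks listed first, the external USB parity disk last, since md conventionally places RAID-4 parity on the last listed device):

```shell
# Hypothetical device names: /dev/sdb, /dev/sdc = internal data disks,
# /dev/sdd = external USB disk used as the dedicated parity device.
sudo mdadm --create /dev/md0 --level=4 --raid-devices=3 \
    /dev/sdb /dev/sdc /dev/sdd

# Sanity-check the layout and watch the initial parity sync
cat /proc/mdstat
sudo mdadm --detail /dev/md0
```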
My thinking is that: A) parity is really only touched on write operations, so reads would not involve the external drives; and B) the parity disk only has to absorb one parity block per stripe, versus the data being spread across all the data disks, so perhaps the write performance would even out as well.
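Just to make the parity intuition concrete: RAID-4/5 parity is a bytewise XOR across the data disks, and a lost disk's contents come back by XORing the survivors with the parity. A toy single-byte demo:

```shell
# Toy demo: one byte per "disk"
d1=0xA5; d2=0x3C; d3=0x0F            # data bytes on three data disks
p=$(( d1 ^ d2 ^ d3 ))                # parity byte on the parity disk
# If disk 2 dies, its byte is recovered by XORing the survivors with parity:
r=$(( d1 ^ d3 ^ p ))
printf 'parity=0x%02X recovered=0x%02X\n' "$p" "$r"
# prints: parity=0x96 recovered=0x3C (matching d2)
```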
Another option might be to do a RAID-01, with striping across the internal drives and mirroring onto the external drives (although, in such a case, it would probably make more sense just to do frequent scheduled backups to the external storage).
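If I did go that route, md's raid1 has a --write-mostly flag that seems tailor-made for it: reads get steered away from the flagged member, so the USB side would only have to keep up with writes. A rough sketch, again with placeholder device names:

```shell
# Stripe across the fast internal disks (placeholder names)
sudo mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
# Stripe across two external USB disks of matching combined size
sudo mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/sde /dev/sdf
# Mirror the internal stripe onto the external stripe (RAID 0+1);
# --write-mostly keeps reads on the internal stripe.
sudo mdadm --create /dev/md3 --level=1 --raid-devices=2 \
    /dev/md1 --write-mostly /dev/md2
```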
Does anyone have any experience doing such a nonstandard configuration? I am particularly interested in the RAID-3/4 idea, more as a proof of concept and test of performance than for pure practicality, although I think there could be some practical use if it proved to work. Thanks!