Score:2

How is Hardware RAID for NVME drives configured on SuperMicro?

in flag

I've purchased an X13SAE motherboard from Supermicro, which is advertised as

M.2 Interface: 3 PCIe 4.0 x4 (RAID 0, 1, 5) M.2 Form Factor: 2280 M.2 Key: M-Key

I have two NVMe drives in my M.2 ports. I want to put them in RAID 1 so I can weather a drive failure. I don't see the option in the BIOS. Where do you configure this? I am using Linux. Is there a utility to configure hardware RAID?

Zac67 avatar
ru flag
That's not hardware RAID but "host RAID", i.e. software RAID with a driver in the BIOS.
Score:6
za flag

The manual (available on the "Resources" page under your link) states there is a SATA RAID. I strongly suspect that is so-called "fake" RAID, i.e. not a true hardware RAID. There is no explanation of how to set it up, only a link that points to a collection of manuals covering SATA or SAS RAID on SuperMicro motherboards.

There is a single mention of RAID regarding these M.2 ports (out of 12 total mentions), on page 52, in the form of a table column. No explanation of how to use it or where to look whatsoever.

You have to ask SuperMicro about that. Good luck.

Or... are you going to use Linux? Then use its MD RAID, or one of its "RAID-enabled" file systems, BTRFS or ZFS. That's better than any "hardware RAID" that could ever be built into such a motherboard anyway. Seriously.
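To illustrate the MD RAID route, here is a minimal sketch of mirroring the two M.2 drives. The device names `/dev/nvme0n1` and `/dev/nvme1n1` are assumptions; check yours with `lsblk` first. This wipes both drives.

```shell
# Create a RAID 1 mirror from the two NVMe drives (assumed names!):
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      /dev/nvme0n1 /dev/nvme1n1

# Persist the array definition and watch the initial resync:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
cat /proc/mdstat
```

The config file path varies by distribution (`/etc/mdadm/mdadm.conf` on Debian-likes, `/etc/mdadm.conf` elsewhere).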

in flag
Why do you feel software RAID is better for mirroring than hardware RAID when you only have two NVMe slots?
Nikita Kipriyanov avatar
za flag
Because, having worked with servers for more than 15 years, I never saw a single case of such a motherboard having true hardware RAID. They all turned out to be "fake", and when you boot Linux with such a "RAID" it just sees two individual drives with DDF metadata and runs them with its own MD RAID driver. I don't think this will be an exception. But in that case, it is better to use the OS's own RAID metadata; it's more flexible and has features that are missing in DDF (e.g. a write-intent bitmap). And filesystem-based RAID does not mirror unused space, which is especially beneficial with SSDs.
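For reference, the write-intent bitmap mentioned here (which lets a briefly disconnected member resync only the dirty regions instead of the whole drive) can be enabled on an existing MD array; `/dev/md0` is an assumed array name:

```shell
# Add an internal write-intent bitmap to an existing array
# (recent mdadm versions enable one by default on large arrays):
mdadm --grow /dev/md0 --bitmap=internal
```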
Andrew Henle avatar
ph flag
@NikitaKipriyanov I've seen a few with actual motherboard HW RAID- IBM/Lenovo, IIRC. But if you booted Linux and ran `lspci` you'd see a real LSI RAID device, or maybe an Adaptec one, and there was usually a BIOS RAID manager you could get into when booting. I'm pretty sure Sun x86 systems had them too.
mx flag
@EvanCarroll Because in a lot of cases, soft-RAID actually is better. It invariably beats the firmware RAID found on most motherboards both in terms of features and performance, and in my experience at least it also beats most hardware RAID in terms of features too. Also, BTRFS and ZFS easily beat all but the best hardware RAID because they can tell you _which copy is wrong_ and will even fix it for you transparently in most cases.
Mark avatar
tz flag
@AustinHemmelgarn, there's one feature of hardware RAID that software RAID generally can't duplicate: you can boot from it. It takes careful design to get a bootable software RAID, and even then, you're usually limited to RAID 1.
Nikita Kipriyanov avatar
za flag
@AndrewHenle I've seen them too, HPE for example. But here we're talking about a SuperMicro entry-segment motherboard.
Nikita Kipriyanov avatar
za flag
@Mark just look at how Proxmox sets up booting when installed on ZFS — that's the example everybody should follow. This actually is better than booting "from HW RAID". And you aren't limited to *any* RAID level; you can *partition* the drives and set up different levels on different parts; this is how Synology DSM works. This is so much more flexible than HW RAID that the two are incomparable.
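The "different levels on different parts" idea can be sketched like this; all device names, partition sizes, and the choice of levels are assumptions for illustration, not anything Synology-specific:

```shell
# Partition both drives identically: a small boot slice and the rest.
parted -s /dev/nvme0n1 mklabel gpt mkpart boot 1MiB 1GiB mkpart data 1GiB 100%
parted -s /dev/nvme1n1 mklabel gpt mkpart boot 1MiB 1GiB mkpart data 1GiB 100%

# RAID 1 for the boot slices, RAID 0 for scratch space on the remainder:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1p1 /dev/nvme1n1p1
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/nvme0n1p2 /dev/nvme1n1p2
```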
marcelm avatar
ng flag
@EvanCarroll Because the RAID provided by Intel's chipset is what we call "hostraid" or "fakeraid". It is 100% implemented in software, with a little support from the BIOS for booting. It offers [no advantages over Linux software raid](https://superuser.com/questions/245928/does-fake-raid-offer-any-advantage-over-software-raid); in fact, quite the contrary.
mx flag
@Mark I would hardly call ensuring that you’re doing sane things to begin with ‘careful design’. The only special advantage that HW RAID has is that you don’t have to replicate the boot sector manually. A good bootloader, such as GRUB, can boot just fine from LVM, BTRFS, or ZFS with almost any RAID configuration you can imagine for them, though it’s debatable why you would want anything but simple replication for your boot volume.
Andrew Henle avatar
ph flag
@NikitaKipriyanov Hardware RAID has one important feature that software RAID doesn't: easy drive replacement. Pop out the drive with the red/orange light, put in a new one, HW RAID will sync it automatically. That's a hugely important feature for server farms with hundreds or thousands of systems where failed drives are replaced under warranty by a vendor tech at random hours. When you only have a handful of sysadmins and that many servers, if you use software RAID the sysadmins would be spending way too much of their limited time doing drive replacement.
Nikita Kipriyanov avatar
za flag
First of all, nobody forbids you from implementing this with software RAID too. I'll tell you one big secret which HW RAID manufacturers try hard to hide: **all** RAIDs are software, including theirs, and what they do on drive replacement is just a program which one can easily implement in Linux; and there *are* implementations (see, for example, Synology DSM). Flexible again: you can do that with a much more specialized policy than "insert a drive and it begins syncing". And I am sure we're not talking about a big server farm with vendor warranty here.
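The manual replacement workflow being debated above amounts to three mdadm commands; a sketch with assumed array and device names:

```shell
# Mark the failing member faulty and detach it from the mirror:
mdadm /dev/md0 --fail   /dev/nvme1n1
mdadm /dev/md0 --remove /dev/nvme1n1

# ...physically swap the drive, then add the replacement;
# MD starts the resync automatically:
mdadm /dev/md0 --add /dev/nvme1n1
```

Automating this (e.g. triggering `--add` from a udev rule on hot-plug) is exactly the kind of "program" appliance vendors ship.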
Score:5
ca flag

Try following the steps outlined here. In short:

  1. press DEL during boot to enter BIOS
  2. go to Advanced > Chipset Configuration > North Bridge > IIO Configuration > Intel VMD Technology
  3. set NVMe Mode Switch to manual and configure your devices.

That said, Intel's integrated RAID (in both its Rapid Storage and VROC incarnations) is little more than firmware-based "fake" RAID with custom disk metadata but no dedicated RAID hardware (crucially, it lacks any sort of power-loss-protected writeback cache). On Linux, it is managed by the very same mdadm tool used for common software-based RAID.
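You can see this for yourself from Linux, even with the firmware RAID enabled; these are ordinary mdadm invocations (the member device name is an assumption):

```shell
mdadm --detail-platform        # report the platform's VROC/IMSM RAID capabilities
mdadm --examine /dev/nvme0n1   # show the vendor (IMSM) metadata on a member drive
cat /proc/mdstat               # the "fake" RAID shows up as a regular md array
```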

The only advantage it has over pure software RAID is that it presents a single redundant boot device rather than two mirrored devices. With plain software RAID, one can end up with an unbootable server if the disk holding the bootsector/bootloader fails. However, one can easily overcome this issue by manually installing the bootsector/bootloader on both physical disks (i.e. via grub-install).
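For a legacy-BIOS setup, the workaround described above is just two commands (device names assumed):

```shell
# Install GRUB's boot code on both mirror members, so either
# drive alone can still boot the system:
grub-install /dev/nvme0n1
grub-install /dev/nvme1n1
```

On UEFI systems the equivalent is keeping an EFI System Partition on each drive and running `grub-install` against each one's `--efi-directory` in turn.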

All things considered, I would simply use Linux software RAID without caring for BIOS/UEFI-based "fake" RAID.
