Score:3

How to directly add an SSD to a PowerEdge server without entering BIOS or creating a Virtual Disk?

tr flag

I have a Dell PowerEdge R430, booting to Debian from an SSD. In order to get the system to "see" this first SSD, I first had to use the BIOS to create a Virtual Disk representing that Physical Disk. This was frustrating - I had no need or desire for a Virtual Disk - but I was content to accept it in the interests of getting the server up and running.

Now I'm trying to add a second SSD to the server for more storage. I've physically installed the drive following the instructions here, the drive bay shows a green status light, and I've rebooted "just in case", but the drive still doesn't show up in the output of `lsblk` (which is how I am accustomed to identifying external hard drives before mounting them with `mount` or `/etc/fstab`).

I would be surprised to hear that adding a drive to a server requires creating a new Virtual Disk to represent it, since AFAICT that requires accessing the BIOS, which in turn requires attaching a monitor and keyboard to the server.

Resources I've consulted for advice:

  • How Reconfigure [sic] a Virtual disk or add additional hard drives - describes how to add hard drives to an existing Virtual Disk, but not how to add hard drives to the server directly, bypassing the concept of Virtual Disks.
  • The aforementioned hotswap instructions
  • PowerEdge Tutorials: Physical Disks and RAID Controller (PERC) on Servers is an index which links to the prior two pages, among others. I note that it says "A RAID controller can prevent [sic - presumably 'present'] groups of physical disks to the operating system for which data protection schemes such as RAID 5 or RAID 10 can be defined to protect and guarantee data integrity."
  • This question suggests that the choice is "all-or-nothing" - I can either have all RAID-enabled disks, or none at all. I tried pretty hard to bypass a RAID setup while setting up the server initially, so I suspect that this option is not open to me - and, in any case, I suspect this would wipe the existing server setup I have on the original disk, so this is not an attractive option (and, again, it requires digging out a keyboard and monitor to access the BIOS).
  • This answer suggests that it's possible, but gives no indication of how.
  • This answer suggests that a similar approach is possible - "It will not be part of any existing RAID arrays. It will not have redundancy. But the PowerEdge will be able to access it" - (though that's dealing with a different model of server, and a 3.5" bay rather than the 2.5" SSDs I am using)
Nikita Kipriyanov avatar
za flag
What you want is to enable so-called JBOD mode. In that mode, an LSI/Avago/Broadcom MegaRAID controller (which the PERC is) can work as a simple HBA, forwarding raw disk access to the OS. The support matrix is [available on the Broadcom web site](https://www.broadcom.com/support/knowledgebase/1211161496893/megaraid-3ware-and-hba-support-for-various-raid-levels-and-jbod-). Unfortunately, as far as I remember, the Dell version of the PERC firmware in the R430 has this mode disabled. You can check that for your server with Broadcom utilities such as `megacli -adpallinfo -a0`.
Nikita Kipriyanov avatar
za flag
The same utility can change the setting if it's permitted by the firmware. The last time I configured a MegaRAID SAS we also made that change with its built-in setup utility (which you can enter with a keystroke during boot), but that card had its original firmware, not Dell's version.
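A minimal sketch of the check described above, assuming `megacli` is installed and on the PATH (the exact label of the JBOD capability line varies between firmware versions):

```python
# Hedged sketch: ask the first MegaRAID adapter for its capabilities and
# look for JBOD support. Assumes `megacli` is installed and on the PATH;
# Dell-locked PERC firmware may report no JBOD support at all.
import subprocess

result = subprocess.run(
    ["megacli", "-adpallinfo", "-a0"],
    capture_output=True, text=True, check=True,
)

# Firmware that supports the mode typically prints a line like "JBOD : Yes"
# somewhere in the capabilities section; the exact wording varies.
for line in result.stdout.splitlines():
    if "JBOD" in line:
        print(line.strip())
```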
Score:13
mx flag

Required Note: I work for Dell

In order to get the system to "see" this first SSD, I first had to use the BIOS to create a Virtual Disk representing that Physical Disk. This was frustrating - I had no need or desire for a Virtual Disk - but I was content to accept it in the interests of getting the server up and running.

What you're describing is the normal behavior for the vast majority of (possibly all) RAID cards. If you don't want virtual disks, then ideally you don't want a server with a RAID card at all, as virtual disks are the entire purpose of the RAID card. As Nikita mentioned, though, you can change the controller mode.

In Dell terminology it is called HBA (host bus adapter) mode. On iDRAC 8 you do that here if the card supports it:

[Screenshot: iDRAC 8 interface showing where to change the controller mode]

I would be surprised to hear that adding a drive to a server requires creating a new Virtual Disk to represent it, since AFAICT that requires accessing the BIOS, which in turn requires attaching a monitor and keyboard to the server.

No, you do not have to access the BIOS to configure RAID. You usually configure the RAID through the iDRAC/IPMI/iLO for Dell/Supermicro/HPE respectively.

In large datacenters we usually do the configuration programmatically through the Redfish API. Dell publishes a Terraform module, Ansible modules, Python libraries, plus a standard REST API guide - all of which will let you automatically configure the RAID. Most large production environments leverage one of those, often through a thing called OpenManage Enterprise.

Here, for example, is some Python that creates a virtual disk automatically.
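A minimal sketch of that approach, using the `requests` library against the iDRAC's Redfish endpoint; the address, credentials, controller ID, and drive ID below are placeholders, and the exact payload fields and drive URIs vary with iDRAC firmware version:

```python
# Hedged sketch: create a single-drive virtual disk through the iDRAC Redfish API.
# All the IDs below are placeholders - enumerate
# /redfish/v1/Systems/System.Embedded.1/Storage to find your real ones.
import requests

IDRAC = "https://192.168.0.120"     # placeholder iDRAC address
AUTH = ("root", "calvin")           # placeholder credentials
CONTROLLER = "RAID.Integrated.1-1"  # placeholder controller ID
DRIVE = "Disk.Bay.1:Enclosure.Internal.0-1:RAID.Integrated.1-1"  # placeholder

payload = {
    # Older firmware takes "VolumeType"; newer firmware uses "RAIDType": "RAID0".
    "VolumeType": "NonRedundant",
    "Drives": [{
        "@odata.id": "/redfish/v1/Systems/System.Embedded.1/Storage/"
                     f"{CONTROLLER}/Drives/{DRIVE}"
    }],
}

# iDRACs typically ship with self-signed certificates, hence verify=False.
resp = requests.post(
    f"{IDRAC}/redfish/v1/Systems/System.Embedded.1/Storage/{CONTROLLER}/Volumes",
    json=payload, auth=AUTH, verify=False,
)
resp.raise_for_status()
print("Virtual disk creation request accepted:", resp.status_code)
```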

This question suggests that the choice is "all-or-nothing" - I can either have all RAID-enabled disks, or none at all.

Correct, this is how most RAID cards work. Mixed mode, doing HBA and RAID at the same time, does exist but is rare. In 2023 there really isn't a production use case I can think of. It used to be that you might have something like a server for video processing, where you could have RAID for all the data and direct drive access for live editing, or database servers with tiered storage or cache drives, etc., but the plummeting cost of NVMe drives has pretty much eliminated those use cases.

**A word of warning: turning on HBA mode blows away all existing data.**

**A bit on how RAID works**

In most servers with RAID you will see that all the drives will be physically wired into a PCIe card towards the back of the server. It depends on exactly what configuration you have but with Dell there's usually what we call the backplane where all the drives plug in from the front and then two blue cables (usually SAS cables) that run from that backplane to the RAID card.

For this reason, you can't just bypass the RAID card because it is electrically between you and the drives.

Switching from RAID mode to HBA mode breaks the drives because of how data is arranged in RAID mode. A full explanation of how RAID works is a bit too long, but the BLUF (bottom line up front) version is that the RAID card arranges the data in a special way for redundancy, according to an algorithm. When you turn HBA mode on, you disable that algorithm and the drives are accessed directly. Since direct access doesn't know how to read the patterns of data that were previously there, it just sees garbage it doesn't know what to do with - which means the effective destruction of the data, short of performing a forensic analysis to recreate the original.
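To make that concrete, here is a toy sketch - not the PERC's actual on-disk format, just the idea - of RAID-5-style striping: data blocks are interleaved with rotating XOR parity blocks, so a raw read of any single member disk returns a mix of data fragments and parity that no filesystem can make sense of:

```python
# Toy illustration only - not the PERC on-disk format. Shows why a raw read
# of one RAID-5 member disk looks like garbage: data is striped across disks
# and interleaved with XOR parity blocks.
data = b"filesystem metadata and file contents..."
BLOCK = 8

# Split the data into fixed-size blocks (padding the last one with zeros).
blocks = [data[i:i + BLOCK].ljust(BLOCK, b"\0") for i in range(0, len(data), BLOCK)]

disks = [[], [], []]  # a three-disk RAID 5
for stripe, i in enumerate(range(0, len(blocks) - 1, 2)):
    a, b = blocks[i], blocks[i + 1]
    parity = bytes(x ^ y for x, y in zip(a, b))  # XOR parity block
    # Rotate which disk holds the parity block each stripe, as RAID 5 does.
    order = [a, b, parity]
    k = stripe % 3
    order = order[-k:] + order[:-k] if k else order
    for disk, block in zip(disks, order):
        disk.append(block)

# A raw read of "disk 0" alone: fragments of data mixed with parity bytes.
print(b"".join(disks[0]))
```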

ws flag
I don't know about the specific hardware used by the OP; however, in my experience, after switching an existing system from HW RAID to HBA mode, the OS won't be able to read the original disks.
Grant Curell avatar
mx flag
I can bold it if you think it is necessary but in my post I mentioned "A word of warning: turning on HBA mode blows away all existing data."
ws flag
Doh! Sorry, missed that. So the data on disk is changed / it's impossible to recover from?
Grant Curell avatar
mx flag
It has been bolded. Yes, I'll expand the "how RAID works" section to answer why.
Grant Curell avatar
mx flag
I updated the section "a bit on how RAID works" to explain that
tr flag
This might be the most helpful and comprehensive answer I've ever received on the Internet - thanks a ton! I'll comment back later once I actually have time to execute this ("sadly" this isn't actually my job, it's just a hobby homeserver :P ), but I see no reason not to accept this immediately - I doubt anyone's going to top this! Thanks again.
tr flag
No luck, unfortunately - I cannot get iDRAC accessible on my machine (it doesn't show up in the DHCP leases, and when I set the IP manually, attempts to connect to it just hang indefinitely). I was able to create a Virtual Disk via a direct keyboard/monitor connection to the machine in the BIOS setup, so that worked, but it'll be an arse to have to do that every time. I'll keep trying to figure out how to get iDRAC working.
tr flag
(I did have to use `sudo swapoff -U <UUID-of-/dev/sdb3>` to then be able to `sudo mkfs -t ext4 /dev/sdb3` and mount it - for some reason it initially got mounted as swap space)
Grant Curell avatar
mx flag
If you can hit the iDRAC with the console but not over the network it means you have a network problem. Try pinging the iDRAC IP. If it’s pingable then try nmap with -p 443. I’d be willing to bet one or both don’t work.
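In the same spirit, a quick sketch that checks whether the iDRAC answers on TCP 443 (the address is a placeholder):

```python
# Quick reachability check for an iDRAC, equivalent in spirit to
# `nmap -p 443 <idrac-ip>`. The address below is a placeholder.
import socket

IDRAC_IP = "192.168.0.120"  # placeholder: substitute your iDRAC's address

try:
    with socket.create_connection((IDRAC_IP, 443), timeout=5):
        print("TCP 443 open: the iDRAC web server is reachable")
except OSError as exc:
    print(f"Cannot reach {IDRAC_IP}:443 - likely a network problem: {exc}")
```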