Required Note: I work for Dell
In order to get the system to "see" this first SSD, I first had to use the BIOS to create a Virtual Disk representing that Physical Disk. This was frustrating - I had no need or desire for a Virtual Disk - but I was content to accept it in the interests of getting the server up and running.
What you're describing is the normal behavior for the vast majority of (all?) RAID cards. If you don't want virtual disks, then you ideally don't want a server with a RAID card, as virtual disks are the entire purpose of the RAID card. As Nikita mentioned though, you can change the controller mode.
In Dell terminology it is called HBA (host bus adapter) mode. On iDRAC 8 you change it from the storage controller settings, if the card supports it.
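If you would rather query this than click through the UI, here is a rough sketch against the standard Redfish storage collection that lists what the card reports about itself. The address and credentials are placeholders, and the Dell-specific mode details live under each controller's Oem.Dell section and vary by firmware, so inspect the raw JSON on your own system before scripting against it.

```python
# Sketch: list the storage controllers the iDRAC reports via Redfish.
# Address and credentials are placeholders. Dell-specific details such
# as the current RAID/HBA mode live under each controller's Oem.Dell
# section and vary by firmware, so inspect the raw JSON on your system.
import requests

IDRAC = "https://192.168.0.120"   # iDRAC address (placeholder)
AUTH = ("root", "calvin")         # iDRAC credentials (placeholder)

storage = requests.get(
    f"{IDRAC}/redfish/v1/Systems/System.Embedded.1/Storage",
    auth=AUTH, verify=False,
).json()

for member in storage.get("Members", []):
    ctrl = requests.get(f"{IDRAC}{member['@odata.id']}",
                        auth=AUTH, verify=False).json()
    for sc in ctrl.get("StorageControllers", []):
        print(ctrl.get("Id"), sc.get("Model"), sc.get("SupportedRAIDTypes"))
```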
I would be surprised to hear that adding a drive to a server requires creating a new Virtual Disk to represent it, since AFAICT that requires accessing the BIOS, which in turn requires attaching a monitor and keyboard to the server.
No, you do not have to access the BIOS to configure RAID. You usually configure the RAID through the iDRAC/IPMI/iLO for Dell/SuperMicro/HPE respectively.
In large datacenters we usually do the configuration programmatically through the Redfish API. Dell publishes a Terraform module, Ansible modules, Python scripts, plus a standard REST API guide - all of which will let you automatically configure the RAID. Any large production environment will leverage one of those, often through a tool called OpenManage Enterprise.
As an example, here is some Python that creates a virtual disk automatically.
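This is a minimal sketch against the standard Redfish storage model. The iDRAC address, credentials, controller ID and drive IDs are placeholders, and the exact payload shape can vary by iDRAC firmware and Redfish schema version, so treat it as a starting point rather than a drop-in script.

```python
# Minimal sketch: create a RAID 1 virtual disk through the Redfish API.
# The iDRAC address, credentials, controller ID and drive IDs below are
# placeholders - pull real values from your own system by browsing
# /redfish/v1/Systems/System.Embedded.1/Storage first. Payload shape can
# vary by iDRAC firmware (older Redfish schemas use "VolumeType" and put
# "Drives" at the top level instead of under "Links").
import requests

IDRAC = "https://192.168.0.120"      # iDRAC address (placeholder)
AUTH = ("root", "calvin")            # iDRAC credentials (placeholder)
CONTROLLER = "RAID.Integrated.1-1"   # storage controller ID (placeholder)
DRIVES = [
    "Disk.Bay.0:Enclosure.Internal.0-1:RAID.Integrated.1-1",
    "Disk.Bay.1:Enclosure.Internal.0-1:RAID.Integrated.1-1",
]

payload = {
    "Name": "OS-Mirror",
    "RAIDType": "RAID1",
    "Links": {
        "Drives": [
            {"@odata.id": f"/redfish/v1/Systems/System.Embedded.1"
                          f"/Storage/{CONTROLLER}/Drives/{drive}"}
            for drive in DRIVES
        ]
    },
}

# POST to the controller's Volumes collection. The iDRAC queues a
# configuration job and answers 202 Accepted with a Location header
# that points at the job you can poll for completion.
resp = requests.post(
    f"{IDRAC}/redfish/v1/Systems/System.Embedded.1/Storage/{CONTROLLER}/Volumes",
    json=payload,
    auth=AUTH,
    verify=False,  # lab convenience only - verify TLS in production
)
print(resp.status_code, resp.headers.get("Location"))
```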
This question suggests that the choice is "all-or-nothing" - I can either have all RAID-enabled disks, or not at all.
Correct, this is how most RAID cards work. Mixed mode, running HBA and RAID mode at the same time, does exist but is rare. In 2023 there really isn't a production use case I can think of. It used to be that you might have something like a video processing server with RAID for the bulk data and direct drive access for live editing, or database servers with tiered storage or cache drives, but the plummeting cost of NVMe drives has pretty much eliminated those use cases.
A word of warning: turning on HBA mode blows away all existing data.
A bit on how RAID works
In most servers with RAID, all the drives are physically wired into a PCIe card towards the back of the server. The exact layout depends on your configuration, but with Dell there is usually a backplane that the drives plug into from the front, and two blue cables (usually SAS cables) that run from that backplane to the RAID card.
For this reason you can't just bypass the RAID card: it sits electrically between you and the drives.
Switching from RAID mode to HBA mode breaks the drives because of how data is arranged in RAID mode. A full explanation of how RAID works is a bit too long for here, but the BLUF (bottom line up front) version is that the RAID card arranges the data in a special way for redundancy according to an algorithm. When you turn on HBA mode you stop that algorithm from running and the drives are accessed directly. Since direct access doesn't know how to read the patterns of data that were previously there, it just sees garbage it doesn't know what to do with, which means the effective destruction of the data short of a forensic analysis to recreate the original.
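As a toy illustration only (real controllers use their own stripe sizes, parity rotation and on-disk metadata), here is a little Python that stripes data across three "drives" with XOR parity. Reading any one drive by itself gives interleaved fragments and parity bytes that are meaningless without the controller's layout information.

```python
# Toy illustration of RAID-5-style striping with XOR parity. Real
# controllers use their own stripe sizes, parity rotation and metadata,
# so this only shows the principle of why a raw drive read looks like
# garbage once the controller is no longer interpreting the layout.

def stripe_with_parity(data: bytes, chunk: int = 4):
    """Split data into stripes of 2 data chunks + 1 XOR parity chunk."""
    stripes = []
    for i in range(0, len(data), 2 * chunk):
        a = data[i:i + chunk].ljust(chunk, b"\x00")
        b = data[i + chunk:i + 2 * chunk].ljust(chunk, b"\x00")
        parity = bytes(x ^ y for x, y in zip(a, b))
        stripes.append((a, b, parity))
    return stripes


message = b"hello raid world"
drives = [bytearray(), bytearray(), bytearray()]

# The "controller" scatters chunks across drives, rotating which drive
# holds the parity chunk for each stripe, roughly like RAID 5 does.
for n, stripe in enumerate(stripe_with_parity(message)):
    for slot, chunk_bytes in enumerate(stripe):
        drives[(slot + n) % 3].extend(chunk_bytes)

# Reading any single drive directly gives interleaved fragments and
# parity bytes, meaningless without the controller's layout information.
for i, d in enumerate(drives):
    print(f"drive {i}: {bytes(d)!r}")
```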