I have done something like this using AWS ephemeral (instance-store) disks, which are very fast but do not survive a power off/on cycle.
We had a "seed disk", a normal cheap EBS volume (GP2 then, GP3 now), in a RAID1 with the fast ephemeral disks.
I wrote a bash script for rc.local that used the nvme list
command output to detect any ephemeral disks and join them to the array where appropriate.
In your case, something at startup would have to create the ramdisk and join it to the existing degraded array.
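A boot-time sketch of that idea, using the kernel's brd ramdisk module. The array name /dev/md0 and the 4 GiB size are assumptions for illustration; the function only prints the commands so you can review them before running them as root from rc.local:

```shell
#!/usr/bin/env bash
# Print (rather than run) the commands that would recreate the ramdisk half
# of the mirror at boot. /dev/md0 and the size are assumptions in this sketch.
ramdisk_join_cmds() {
    local size_kib=$1    # brd's rd_size module parameter is in KiB
    local md=$2
    echo "modprobe brd rd_nr=1 rd_size=${size_kib}"
    # After a reboot the ramdisk is blank, so --re-add will usually fail
    # (no superblock); fall back to a plain --add, which triggers a resync.
    echo "mdadm ${md} --re-add /dev/ram0 || mdadm ${md} --add /dev/ram0"
}

ramdisk_join_cmds $((4 * 1024 * 1024)) /dev/md0
```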
PROD pathservice1.taws ~ $ nvme list
Node          SN   Model                             Namespace  Usage                    Format       FW Rev
------------  ---  --------------------------------  ---------  -----------------------  -----------  ------
/dev/nvme0n1  123  Amazon Elastic Block Store        1          128.85 GB / 128.85 GB    512 B + 0 B  1.0
/dev/nvme1n1  234  Amazon Elastic Block Store        1          107.37 GB / 107.37 GB    512 B + 0 B  1.0
/dev/nvme2n1  345  Amazon Elastic Block Store        1            2.20 TB /   2.20 TB    512 B + 0 B  1.0
/dev/nvme3n1  456  Amazon EC2 NVMe Instance Storage  1          900.00 GB / 900.00 GB    512 B + 0 B  0
/dev/nvme4n1  567  Amazon EC2 NVMe Instance Storage  1          900.00 GB / 900.00 GB    512 B + 0 B  0
The last two are ephemeral disks of 900G each.
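The detection logic in that rc.local script can be sketched as a filter over the nvme list output above; the loop in the comment uses /dev/md0 as an assumed array name:

```shell
#!/usr/bin/env bash
# Print the device node of every NVMe instance-store (ephemeral) disk.
# Ephemeral disks report the model "Amazon EC2 NVMe Instance Storage",
# while EBS volumes report "Amazon Elastic Block Store".
find_ephemeral_disks() {
    awk '/Amazon EC2 NVMe Instance Storage/ {print $1}'
}

# In rc.local (as root) you would then do something like:
#   nvme list | find_ephemeral_disks | while read -r dev; do
#       mdadm /dev/md0 --add "$dev"   # /dev/md0 is an assumed array name
#   done
```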
- Use the "write-mostly" option on the EBS volume. Reads will still go to it if the fast disk is absent or doesn't have those blocks yet; once the fast disk is populated (or "warmed"), reads happen there.
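Write-mostly is set per member at creation time; it applies to the devices listed after the flag, so the fast disk goes first and the EBS volume after it. A sketch with assumed device names (again only printing the command for review):

```shell
#!/usr/bin/env bash
# Build the mdadm command that creates the mirror with the EBS half marked
# write-mostly. Device names here are assumptions; match them to `nvme list`.
mirror_create_cmd() {
    local md=$1 fast=$2 ebs=$3
    # --write-mostly marks the member devices listed after it,
    # so the fast disk precedes the flag and the EBS volume follows it.
    echo "mdadm --create ${md} --level=1 --raid-devices=2 ${fast} --write-mostly ${ebs}"
}

mirror_create_cmd /dev/md0 /dev/nvme3n1 /dev/nvme1n1
```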
The good thing is that writes to the mdX device persist through orderly reboots and power-offs. An unexpected hard power-down may still lose writes,
so this is a poor substitute for a backup - you should still be doing backups using whatever method works for you.