Edit:
So, I think the question below still applies, but I realized that the system is actually booting from the second drive; it's just missing /home, /boot, and /etc/fstab. I'm sure more is missing, but I guess the better question now is:
How do you properly boot a system with a failed drive on software RAID1, and are there any configuration requirements to get this working properly? Is this even possible?
I verified that the UUID for /boot (which is on /dev/md126) matches across both drives (the members are /dev/sda6 and /dev/sdb6).
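For reference, this is roughly how I verified it (the device names are from my setup; substitute yours):
cat /proc/mdstat                     # overall RAID status; both members should be listed
mdadm --detail /dev/md126            # array state and member devices
mdadm --examine /dev/sda6 /dev/sdb6  # RAID superblocks on the members; UUIDs should match
blkid /dev/md126                     # filesystem UUID that grub.cfg and fstab reference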
I am trying to install CentOS 7 on a two-drive software RAID1 setup. I'd like to install GRUB on both drives so that if one fails, the system will still boot.
I have /boot, /home, /var, and / as separate partitions. I think the important thing to note is that /boot is its own partition and is ext3.
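For context, here is a sketch of how I confirm the layout (the column list is just what I find readable):
lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT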
After installation, I install GRUB on both devices with:
grub2-install /dev/sda
grub2-install /dev/sdb
and regenerate grub.cfg:
grub2-mkconfig -o /boot/grub2/grub.cfg
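To double-check that the boot code actually landed on both MBRs, I use this rough check (assuming a BIOS install, where GRUB's stage-1 code and its "GRUB" string sit in the first sector; strings comes from binutils):
for dev in /dev/sda /dev/sdb; do
    dd if="$dev" bs=512 count=1 2>/dev/null | strings | grep -q GRUB \
        && echo "GRUB boot code present on $dev" \
        || echo "no GRUB signature found on $dev"
done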
GRUB appears to be using UUIDs rather than hd0,0 by default.
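This is how I checked (grepping the generated config for UUID-based root lookups):
grep -n -- '--fs-uuid' /boot/grub2/grub.cfg | head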
When I pull one drive and boot, the system gets past GRUB and tries to start GNOME, but after about 2 minutes it drops from the GDM loading screen to an emergency shell with a few errors.
One thing of note, though, is that /home and /boot don't exist; /etc/fstab doesn't exist either. The shell also complains about a UUID mount point not being found.
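From that shell, this is the diagnostic sketch I plan to run next time (assuming mdadm is available in the emergency shell; /dev/md126 is my /boot array, and the md names for the other mounts would need substituting):
cat /proc/mdstat            # are all the arrays assembled, and with how many members?
mdadm --detail /dev/md126   # state should read 'clean, degraded' with one drive gone
journalctl -xb | grep -i raid | tail -n 20   # recent RAID-related boot messages
mdadm --run /dev/md126      # if an array is listed as inactive, try starting it degraded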
I'm not sure what else is required, but I would like the system to still boot with only one of the two drives from the RAID1.
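One thing I'm wondering about (an assumption on my part, based on my reading of the mdadm and dracut man pages, not something I've confirmed) is whether the initramfs needs the array definitions recorded so it can assemble the remaining arrays when a member is gone; roughly:
mdadm --detail --scan >> /etc/mdadm.conf   # record the array UUIDs
dracut -f --mdadmconf                      # rebuild the initramfs with mdadm.conf included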
References:
https://newbedev.com/how-to-correctly-install-grub-on-a-soft-raid-1
https://unix.stackexchange.com/questions/230349/how-to-correctly-install-grub-on-a-soft-raid-1