I set up an mdadm RAID 5 array with 3 drives initially: sdc1, sdd1, and sde1. The array creation completed without issues. Later I decided to extend the array with one more drive: after partitioning it properly, I added the device sdf1 to the array and expanded the partition, all without issues.

The drives were in a JBOD, but I needed to move to a newer JBOD with more available bays. Before I moved the physical drives to the new JBOD, I unmounted the array (/dev/md0) and removed each device one by one, but I forgot to stop the array first with mdadm --stop.
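For reference, the commands I used were roughly the following (reconstructed from memory, so the exact options may have differed):

    # Initial 3-drive RAID 5 (device names as described above)
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdc1 /dev/sdd1 /dev/sde1

    # Later: add the fourth drive and grow the array to 4 devices
    mdadm /dev/md0 --add /dev/sdf1
    mdadm --grow /dev/md0 --raid-devices=4

    # What I forgot to run before pulling the drives:
    # mdadm --stop /dev/md0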
Now, when I spin up all the drives in the new JBOD (same server), and try to assemble the array, only three of them are assembled; the last one, /dev/sdf1, is left out. None of the drives report any errors, and the metadata of all of them seems to indicate that they are part of the same array. However, /dev/sdf1 is shown as removed in the mdadm array, and it is the only one whose array state reads .AAA instead of AA.A like the other three drives.
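This is how I have been checking the per-device metadata and the current state (the grep pattern is just to pull out the relevant fields):

    # Compare the superblocks of all four members; the Events and
    # "Array State" lines are where sdf1 differs from the others
    mdadm --examine /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 | grep -E 'Events|Array State|Device Role'

    # State of the partially assembled array
    cat /proc/mdstat
    mdadm --detail /dev/md0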
Is it still possible to get mdadm to recognize this drive as part of the array? So far my attempts have failed, and I keep getting an error message in the logs stating that /dev/sdf1 is a non-fresh drive and is being kicked from the array.
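For completeness, this is the kind of assemble attempt I have been making and where the error shows up (nothing destructive so far):

    # Stop the partial assembly, then try to assemble with all four members
    mdadm --stop /dev/md0
    mdadm --assemble /dev/md0 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1

    # The "non-fresh ... kicked from the array" message shows up here
    dmesg | tail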