I recently had a drive fail in my RAID 5 array with 3 disks and 1 spare. The spare was used automatically. All seems to be working OK.
I then added a new disk as the new spare like this, where sdb was a working drive in the array and sdc is the new one as reported by dmesg:
sfdisk -d /dev/sdb | sfdisk /dev/sdc
mdadm --manage /dev/md0 --add /dev/sdc1
However, because I forgot to add the partition number on that last command, the spare doesn't use the same partitioning scheme as the other drives, and /proc/mdstat now looks like this:
md0 : active raid5 sdc[4](S) sdd1[5] sdb1[3] sde1[6]
955537408 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
bitmap: 0/4 pages [0KB], 65536KB chunk
lsblk shows this:
sdb 8:16 0 465.8G 0 disk
└─sdb1 8:17 0 455.8G 0 part
└─md0 9:0 0 911.3G 0 raid5 /mnt/nas
sdc 8:32 0 465.8G 0 disk
├─sdc1 8:33 0 455.8G 0 part
└─md0 9:0 0 911.3G 0 raid5 /mnt/nas
sdd 8:48 0 465.8G 0 disk
└─sdd1 8:49 0 455.8G 0 part
└─md0 9:0 0 911.3G 0 raid5 /mnt/nas
sde 8:64 0 465.8G 0 disk
└─sde1 8:65 0 455.8G 0 part
└─md0 9:0 0 911.3G 0 raid5 /mnt/nas
Is it a problem that the spare is set up differently from the active drives in the event that it needs to take over? Should I remove it and re-add it so that it matches the others?
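If re-adding is the right call, this is roughly what I was planning to run (untested sketch; device names are from my setup above, and I'm assuming --zero-superblock is the right way to clean up the metadata that was written to the bare disk):

```shell
# Remove the whole-disk spare from the array.
mdadm --manage /dev/md0 --remove /dev/sdc

# Clear the md superblock that was written to the bare disk
# when I added /dev/sdc instead of /dev/sdc1.
mdadm --zero-superblock /dev/sdc

# The partition table was already copied from sdb with sfdisk,
# so re-add the partition this time.
mdadm --manage /dev/md0 --add /dev/sdc1
```

Is that the correct sequence, or is there a step I'm missing (e.g. re-running sfdisk in case the partition table was damaged)?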